Sample records for vector space projections

  1. On orthogonal expansions of the space of vector functions which are square-summable over a given domain and the vector analysis operators

    NASA Technical Reports Server (NTRS)

    Bykhovskiy, E. B.; Smirnov, N. V.

    1983-01-01

    The Hilbert space L2(omega) of vector functions is studied. A decomposition of L2(omega) into orthogonal subspaces is discussed, and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, further properties of these projection operators are examined.
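
    A finite-dimensional analogue can make the projection idea concrete. The sketch below is an illustration only, not the paper's construction: it samples a function on a grid, builds an orthonormal basis of a hypothetical subspace, and splits the sampled function into orthogonal components. The grid, basis, and function are arbitrary choices.

```python
# Minimal finite-dimensional sketch of orthogonal projection onto a subspace,
# loosely analogous to splitting a space of sampled vector functions into
# orthogonal pieces.  The grid, basis choice and function are illustrative only.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)

# Hypothetical subspace: a few low-frequency cosine modes sampled on the grid.
basis = np.column_stack([np.cos(k * np.pi * x) for k in range(4)])
Q, _ = np.linalg.qr(basis)          # orthonormal basis of the subspace
P = Q @ Q.T                         # orthogonal projector onto the subspace

f = np.exp(-x) * np.sin(5 * x)      # sampled "vector function"
f_par = P @ f                       # component inside the subspace
f_perp = f - f_par                  # component in the orthogonal complement

# The two pieces are orthogonal and recombine to the original sample.
assert abs(f_par @ f_perp) < 1e-8
assert np.allclose(f_par + f_perp, f)
```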

  2. Human action classification using procrustes shape theory

    NASA Astrophysics Data System (ADS)

    Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun

    2015-02-01

    In this paper, we propose a new method for classifying human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and then derive the Procrustes fit vector for that pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract the sequence of pre-shape vectors from the input video and project it onto the tangent space with respect to a pole taken as the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method on one public dataset, the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
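
    The pipeline can be sketched for 2-D landmark configurations as follows. This is a simplified reading of the method; the array shapes, the alignment step, and the toy data are assumptions rather than the authors' exact procedure.

```python
# Sketch of Procrustes-based classification for 2-D landmark configurations
# (k landmarks x 2 coordinates): pre-shape each frame, compare it to per-class
# mean shapes with the Procrustes distance, and pick the closest class.
# Array names, shapes and the toy data are assumptions.
import numpy as np

def preshape(X):
    """Center a (k, 2) landmark configuration and scale it to unit norm."""
    Xc = X - X.mean(axis=0)
    return Xc / np.linalg.norm(Xc)

def procrustes_distance(z, mu):
    """Full Procrustes distance between two pre-shapes (rotation removed)."""
    # Orthogonal Procrustes: rotation R minimising ||mu - z R||_F.
    U, _, Vt = np.linalg.svd(z.T @ mu)
    R = U @ Vt
    return np.linalg.norm(mu - z @ R)

def classify_sequence(frames, class_means):
    """Assign a sequence of (k, 2) frames to the class whose mean-shape
    sequence has the smallest total Procrustes distance."""
    costs = {}
    for label, means in class_means.items():
        m = min(len(frames), len(means))
        costs[label] = sum(procrustes_distance(preshape(frames[t]), preshape(means[t]))
                           for t in range(m))
    return min(costs, key=costs.get)

# Toy usage with random data standing in for extracted landmarks.
rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 15, 2))                 # input video: 10 frames, 15 landmarks
class_means = {"walk": rng.normal(size=(10, 15, 2)),
               "jump": rng.normal(size=(10, 15, 2))}
print(classify_sequence(frames, class_means))
```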

  3. Projective mappings and dimensions of vector spaces of three types of Killing-Yano tensors on pseudo Riemannian manifolds of constant curvature

    NASA Astrophysics Data System (ADS)

    Mikeš, Josef; Stepanov, Sergey; Hinterleitner, Irena

    2012-07-01

    In our paper we have determined the dimension of the space of conformal Killing-Yano tensors and the dimensions of its two subspaces of closed conformal Killing-Yano and Killing-Yano tensors on pseudo Riemannian manifolds of constant curvature. This result is a generalization of well known results on sharp upper bounds of the dimensions of the vector spaces of conformal Killing-Yano, Killing-Yano and concircular vector fields on pseudo Riemannian manifolds of constant curvature.

  4. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or matrix determinants. These operations are difficult to program and especially hard to realize in hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes, via the Gram-Schmidt process, the final orthogonal vector for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not require matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations and is therefore easy to apply in both parallel computation and hardware. The soundness of the algorithm is shown by its relationship to the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, compared with those two algorithms, is the lowest. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
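
    A rough sketch of the per-endmember orthogonal-vector idea is given below. It uses QR factorization in place of an explicit Gram-Schmidt loop and random spectra as stand-ins, so it illustrates the projection ratio rather than reproducing the published algorithm.

```python
# Sketch of the orthogonal-vector idea for unconstrained abundance estimation:
# for each endmember, keep the part of its spectrum orthogonal to all other
# endmembers, then take the ratio of projections onto that orthogonal vector.
# Endmember and pixel spectra here are random stand-ins.
import numpy as np

def ovp_abundances(x, M):
    """x: pixel spectrum (L,); M: endmember matrix (L, p). Returns (p,) abundances."""
    L, p = M.shape
    a = np.zeros(p)
    for i in range(p):
        others = np.delete(M, i, axis=1)
        # Remove from m_i its projection onto the span of the other endmembers
        # (QR used here in place of an explicit Gram-Schmidt sweep).
        q = M[:, i].copy()
        Qo, _ = np.linalg.qr(others)
        q -= Qo @ (Qo.T @ q)
        # Unconstrained abundance: ratio of projections onto the orthogonal vector.
        a[i] = (x @ q) / (M[:, i] @ q)
    return a

rng = np.random.default_rng(1)
M = rng.random((100, 3))                 # 3 endmembers, 100 bands
true_a = np.array([0.5, 0.3, 0.2])
x = M @ true_a                           # noise-free mixed pixel
print(ovp_abundances(x, M))              # recovers ~ [0.5, 0.3, 0.2]
```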

  5. Estimation of the chemical rank for the three-way data: a principal norm vector orthogonal projection approach.

    PubMed

    Hong-Ping, Xie; Jian-Hui, Jiang; Guo-Li, Shen; Ru-Qin, Yu

    2002-01-01

    A new approach for estimating the chemical rank of a three-way data array, called the principal norm vector orthogonal projection method, has been proposed. The method is based on the fact that the chemical rank of the three-way data array equals the rank of the column space of the unfolded matrix along the spectral or chromatographic mode. The vector with maximum Frobenius norm among all column vectors of the unfolded matrix is selected as the principal norm vector (PNV). The column vectors are then transformed with an orthogonal projection matrix formulated from the PNV, so that the mathematical rank of the column space of the resulting residual matrix decreases by one. This orthogonal projection is carried out repeatedly until the contribution of the chemical species to the signal has been removed entirely. At that point the decrease in mathematical rank equals the chemical rank, and the remaining residual subspace is due entirely to noise. The chemical rank can then be estimated easily using an F-test. The method has been applied successfully to a simulated HPLC-DAD type three-way data array and to two real excitation-emission fluorescence data sets of amino acid mixtures and dye mixtures. A simulation with relatively high added noise shows that the method is robust against heteroscedastic noise. The proposed algorithm is simple, easy to program, and has a light computational burden.
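
    A simplified sketch of the deflation loop is shown below. It replaces the F-test stopping rule with a plain residual-norm threshold and uses synthetic trilinear data, so the sizes, noise level, and threshold are assumptions.

```python
# Simplified sketch of principal-norm-vector deflation for rank estimation of a
# three-way array: unfold, repeatedly project out the largest-norm column, and
# stop when the residual columns look like noise.  A residual threshold stands
# in for the F-test used in the paper.
import numpy as np

def pnv_rank(T, noise_tol):
    """T: three-way array (I, J, K), unfolded here along the first mode."""
    X = T.reshape(T.shape[0], -1).astype(float)    # I x (J*K) unfolded matrix
    rank = 0
    for _ in range(min(X.shape)):
        norms = np.linalg.norm(X, axis=0)
        j = int(np.argmax(norms))                  # principal norm vector (PNV)
        if norms[j] <= noise_tol:                  # remaining columns ~ noise
            break
        v = X[:, j] / norms[j]
        X -= np.outer(v, v @ X)                    # orthogonal projection deflation
        rank += 1
    return rank

# Toy data: rank-3 trilinear structure plus small noise.
rng = np.random.default_rng(2)
A, B, C = rng.random((20, 3)), rng.random((15, 3)), rng.random((10, 3))
T = np.einsum('ir,jr,kr->ijk', A, B, C) + 1e-6 * rng.normal(size=(20, 15, 10))
print(pnv_rank(T, noise_tol=1e-3))                 # expected: 3
```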

  6. A simple procedure for construction of the orthonormal basis vectors of irreducible representations of O(5) in the OT(3) ⊗ ON(2) basis

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Draayer, J. P.

    2018-06-01

    A simple and effective algebraic isospin projection procedure for constructing orthonormal basis vectors of irreducible representations of O(5) ⊃ OT(3) ⊗ ON(2) from those in the canonical O(5) ⊃ SUΛ(2) ⊗ SUI(2) basis is outlined. The expansion coefficients are components of null space vectors of the projection matrix, which in general has four nonzero elements in each row. Explicit formulae for evaluating OT(3)-reduced matrix elements of O(5) generators are derived.

  7. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  8. Projective formulation of Maggi's method for nonholonomic systems analysis

    NASA Astrophysics Data System (ADS)

    Blajer, Wojciech

    1992-04-01

    A projective interpretation of Maggi's approach to the dynamic analysis of nonholonomic systems is presented. Both linear and nonlinear constraint cases are treated in a unified fashion, using the language of vector spaces and tensor algebra.

  9. Maxwell Equations and the Redundant Gauge Degree of Freedom

    ERIC Educational Resources Information Center

    Wong, Chun Wa

    2009-01-01

    On transformation to the Fourier space (k, ω), the partial differential Maxwell equations simplify to algebraic equations, and the Helmholtz theorem of vector calculus reduces to vector algebraic projections. Maxwell equations and their solutions can then be separated readily into longitudinal and transverse components relative to the…
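
    The algebraic projection can be illustrated for a single wavevector; in the sketch below the wavevector and field amplitude are arbitrary example values.

```python
# Tiny illustration of the algebraic Helmholtz split in Fourier space: for one
# wavevector k, a field amplitude F splits into a longitudinal part along k and
# a transverse part perpendicular to k.  The numbers are arbitrary examples.
import numpy as np

k = np.array([1.0, 2.0, -1.0])            # wavevector
F = np.array([0.3, -0.7, 1.1])            # vector field amplitude at this k

k_hat = k / np.linalg.norm(k)
F_long = (F @ k_hat) * k_hat              # longitudinal (curl-free) component
F_trans = F - F_long                      # transverse (divergence-free) component

assert np.isclose(F_trans @ k, 0.0)       # transverse part is orthogonal to k
assert np.allclose(F_long + F_trans, F)
```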

  10. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectra and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projection of the current test pixel spectra and that of the OWR mean spectra is greater than a certain threshold. Comparisons are made using receiver operating characteristics (ROC) curves.
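
    A much-simplified linear version of the dual-window projection test might look like the following. The use of OWR PCA components as projection vectors, the component count, and the synthetic spectra are all assumptions, and the kernelized variants are not shown.

```python
# Much-simplified linear dual-window test: PCA projection vectors are estimated
# from the outer-window region (OWR), and the detection statistic is the
# separation between the projected test pixel and the projected OWR mean.
# Window contents, component count and spectra are synthetic assumptions.
import numpy as np

def anomaly_score(test_pixel, owr_pixels, n_components=3):
    """test_pixel: (L,) spectrum; owr_pixels: (N, L) outer-window spectra."""
    mean_owr = owr_pixels.mean(axis=0)
    Xc = owr_pixels - mean_owr
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]                          # projection vectors (rows)
    return np.linalg.norm(W @ (test_pixel - mean_owr))

rng = np.random.default_rng(3)
owr = rng.normal(size=(200, 50))                   # background spectra, 50 bands
background_pixel = rng.normal(size=50)
target_pixel = 10.0 * rng.normal(size=50)          # spectrally very different pixel

print(anomaly_score(background_pixel, owr))        # small
print(anomaly_score(target_pixel, owr))            # much larger -> flagged as anomaly
```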

  11. Professor Herman Burger (1893-1965), eminent teacher and scientist, who laid the theoretical foundations of vectorcardiography--and electrocardiography.

    PubMed

    van Herpen, Gerard

    2014-01-01

    Einthoven not only designed a high quality instrument, the string galvanometer, for recording the ECG, he also shaped the conceptual framework to understand it. He reduced the body to an equilateral triangle and the cardiac electric activity to a dipole, represented by an arrow (i.e. a vector) in the triangle's center. Up to the present day the interpretation of the ECG is based on the model of a dipole vector being projected on the various leads. The model is practical but intuitive, not physically founded. Burger analysed the relation between heart vector and leads according to the principles of physics. It then follows that an ECG lead must be treated as a vector (lead vector) and that the lead voltage is not simply proportional to the projection of the vector on the lead, but must be multiplied by the value (length) of the lead vector, the lead strength. Anatomical lead axis and electrical lead axis are different entities and the anatomical body space must be distinguished from electrical space. Appreciation of these underlying physical principles should contribute to a better understanding of the ECG. The development of these principles by Burger is described, together with some personal notes and a sketch of the personality of this pioneer of medical physics.

  12. Light weakly coupled axial forces: models, constraints, and projections

    DOE PAGES

    Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...

    2017-05-01

    Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.

  13. Light weakly coupled axial forces: models, constraints, and projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth

    Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.

  14. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, its objective function can be optimized in a reproducing kernel Hilbert space to form kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) the lack of explicit mapping functions, which adds further computational burden when projecting a new sample into the low-dimensional subspace; and 3) the inability to obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome these weaknesses of KDLPP, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed in this paper. The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, and extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with several related classical methods, experimental results on face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  15. Material decomposition in an arbitrary number of dimensions using noise compensating projection

    NASA Astrophysics Data System (ADS)

    O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh

    2017-03-01

    Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if - due to noise - a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either throwing away those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of pixels that fall within it. Therefore, projection onto the boundary becomes an important option. But projection in more than three dimensions is not possible with standard vector algebra: the cross product is not defined. Methods: We describe a technique which employs Clifford Algebra to perform projection in an arbitrary number of dimensions. Clifford Algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars, forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparisons of the accuracy of different threshold combinations against ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford Algebra projection to mitigate noise.
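
    For readers without a Clifford Algebra library at hand, the decomposition step itself can be sketched with ordinary and non-negative least squares, the latter acting as a crude stand-in for projecting noisy measurements back onto the feasible region. The basis values, pixel values, and material labels below are made up; this is not the authors' Clifford-algebra projection.

```python
# Material decomposition sketch in an arbitrary number of energy dimensions,
# using ordinary and non-negative least squares as a plain linear-algebra
# stand-in for the decomposition/projection step.  All numbers are invented.
import numpy as np
from scipy.optimize import nnls

# Rows: energy thresholds (4 here); columns: pure-material HU vectors
# (hypothetical values standing in for e.g. calcium, iodine, gold).
M = np.array([[300.0, 900.0, 1200.0],
              [280.0, 700.0,  950.0],
              [260.0, 500.0,  700.0],
              [250.0, 350.0,  450.0]])

pixel = np.array([610.0, 500.0, 390.0, 305.0])  # noisy 4-energy measurement

# Unconstrained decomposition: may give negative fractions under noise.
frac_ls, *_ = np.linalg.lstsq(M, pixel, rcond=None)

# Noise-compensating alternative: constrain fractions to be non-negative,
# effectively pulling the solution back onto the feasible region.
frac_nn, _ = nnls(M, pixel)

print("least squares:", frac_ls)
print("non-negative :", frac_nn)
```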

  16. A geometric approach to problems in birational geometry.

    PubMed

    Chi, Chen-Yu; Yau, Shing-Tung

    2008-12-02

    A classical set of birational invariants of a variety are its spaces of pluricanonical forms and some of their canonically defined subspaces. Each of these vector spaces admits a typical metric structure which is also birationally invariant. These vector spaces so metrized will be referred to as the pseudonormed spaces of the original varieties. A fundamental question is the following: Given two mildly singular projective varieties with some of the first variety's pseudonormed spaces being isometric to the corresponding ones of the second variety's, can one construct a birational map between them that induces these isometries? In this work, a positive answer to this question is given for varieties of general type. This can be thought of as a theorem of Torelli type for birational equivalence.

  17. The next 25 years: Industrialization of space - Rationale for planning

    NASA Technical Reports Server (NTRS)

    Von Puttkamer, J.

    1977-01-01

    A methodology for planning the industrialization of space is discussed. The suggested approach combines the extrapolative ('push') approach, in which alternative futures are projected on the basis of past and current trends and tendencies, with the normative ('pull') view, in which an ideal state in the far future is postulated and policies and decisions are directed toward its attainment. Time-reversed vectors of the future are tied to extrapolated, trend-oriented vectors of the quasi-present to identify common plateaus or stepping stones in technological development. Important steps in the industrialization of space to attain the short-range goals of production of space-derived energy, goods and services and the long-range goal of space colonization are discussed.

  18. Subspace-based interference removal methods for a multichannel biomagnetic sensor array.

    PubMed

    Sekihara, Kensuke; Nagarajan, Srikantan S

    2017-10-01

    In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
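
    The conventional spatial-domain signal space projection that this work generalizes can be sketched in a few lines. The channel count, interference model, and lead-field vectors below are synthetic assumptions.

```python
# Minimal sketch of spatial-domain signal space projection (SSP): given lead
# field vectors of interference sources, remove their span from the data with
# an orthogonal projector.  Channel counts and the interference model are made up.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_times = 64, 1000

B = rng.normal(size=(n_channels, 2))             # interference-source lead fields
signal = 0.1 * rng.normal(size=(n_channels, n_times))
interference = 5.0 * (B @ rng.normal(size=(2, n_times)))
Y = signal + interference                        # measured data

# Orthogonal projector onto the complement of the interference subspace.
P = np.eye(n_channels) - B @ np.linalg.pinv(B)
Y_clean = P @ Y

# The projector removes the interference component (up to rounding), at the
# cost of also removing any signal component lying in span(B).
print(np.linalg.norm(P @ interference))          # ~ 0
```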

  19. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  20. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.

  1. Some Applications Of Semigroups And Computer Algebra In Discrete Structures

    NASA Astrophysics Data System (ADS)

    Bijev, G.

    2009-11-01

    An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined which is similar to an orthogonal projection in linear vector spaces. Some other important maps, with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and of the matrices corresponding to them, are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined above and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal or close to the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples of computations with binary relations using Maple are given.

  2. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction with Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which moves the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used to implement PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the method.
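
    The sample-space trick can be illustrated with dual PCA: when there are far fewer samples than genes, the eigenproblem is solved on the small Gram matrix and each projection vector is recovered as a weighted sum of samples. The sketch below is a generic illustration of that idea, not the paper's CPP formulation, and the data sizes are arbitrary.

```python
# Sketch of "sample-space" PCA: when the dimension p is much larger than the
# number of samples n, work with the n x n Gram matrix instead of the p x p
# covariance, and recover each projection vector as a weighted sum of samples.
import numpy as np

rng = np.random.default_rng(5)
n, p = 40, 5000                                  # few samples, many genes
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)

K = Xc @ Xc.T                                    # n x n Gram matrix (sample space)
evals, A = np.linalg.eigh(K)                     # eigenvectors in sample space
order = np.argsort(evals)[::-1][:3]              # top 3 components

# Projection vectors in gene space, expressed as weighted sums of the samples.
W = Xc.T @ A[:, order] / np.sqrt(evals[order])   # p x 3, unit-norm columns
scores = Xc @ W                                  # low-dimensional features

print(W.shape, scores.shape)
print(np.allclose(W.T @ W, np.eye(3), atol=1e-8))   # columns are orthonormal
```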

  3. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

    A theory for converting x-ray projection data directly into a hologram, obtained by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to provide the theory for realizing an all-electronic, high-speed see-through 3D visualization system for application to medical diagnosis and non-destructive testing. First, CT is expressed using the pseudo-inverse matrix obtained by singular value decomposition. The CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated as the product of the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted directly to the hologram vector by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section, reconstructed with a He-Ne laser in 3D space from real x-ray projection data acquired by x-ray television equipment, using our direct conversion technique.
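
    The matrix view described above can be sketched as follows. The system matrix and the "phase matrix" are random stand-ins rather than a real CT geometry or CGH model, so the example only illustrates how the pseudo-inverse and the projection-to-hologram conversion compose.

```python
# Sketch of the matrix formulation: CT reconstruction as a pseudo-inverse (via
# SVD), and the projection-to-hologram conversion as one precomputed matrix
# product.  Both matrices are random stand-ins, not a real system model.
import numpy as np

rng = np.random.default_rng(6)
n_pixels, n_rays, n_holo = 64, 120, 200

A = rng.random((n_rays, n_pixels))            # hypothetical CT system matrix
x_true = rng.random(n_pixels)                 # unknown cross-section
p = A @ x_true                                # projection data (one vector)

# CT via the pseudo-inverse, written out with the singular value decomposition.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
print(np.allclose(A_pinv @ p, x_true, atol=1e-6))

# Projection-to-hologram conversion: fold a (hypothetical) CGH phase matrix and
# the pseudo-inverse into a single matrix applied directly to projection data.
H_phase = rng.random((n_holo, n_pixels))      # stand-in for the CGH phase matrix
PTHC = H_phase @ A_pinv
hologram = PTHC @ p                           # same as H_phase @ (A_pinv @ p)
print(np.allclose(hologram, H_phase @ x_true, atol=1e-5))
```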

  4. Ranked centroid projection: a data visualization approach with self-organizing maps.

    PubMed

    Yen, G G; Wu, Z

    2008-02-01

    The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.

  5. Cycle/Cocycle Oblique Projections on Oriented Graphs

    NASA Astrophysics Data System (ADS)

    Polettini, Matteo

    2015-01-01

    It is well known that the edge vector space of an oriented graph can be decomposed in terms of cycles and cocycles (alias cuts, or bonds), and that a basis for the cycle and the cocycle spaces can be generated by adding and removing edges to an arbitrarily chosen spanning tree. In this paper, we show that the edge vector space can also be decomposed in terms of cycles and the generating edges of cocycles (called cochords), or of cocycles and the generating edges of cycles (called chords). From this observation follows a construction in terms of oblique complementary projection operators. We employ this algebraic construction to prove several properties of unweighted Kirchhoff-Symanzik matrices, encoding the mutual superposition between cycles and cocycles. In particular, we prove that dual matrices of planar graphs have the same spectrum (up to multiplicities). We briefly comment on how this construction provides a refined formalization of Kirchhoff's mesh analysis of electrical circuits, which has lately been applied to generic thermodynamic networks.

  6. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements.

    PubMed

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K; Cai, Chang; Nagarajan, Srikantan S

    2018-06-01

    Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.

  7. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.

  8. The application of vector concepts on two skew lines

    NASA Astrophysics Data System (ADS)

    Alghadari, F.; Turmudi; Herman, T.

    2018-01-01

    The purpose of this study is to show how to apply vector concepts to two skew lines in three-dimensional (3D) coordinates and how this application can be used. Many mathematical concepts are functionally related to one another, but the relationship between vectors and 3D geometry has rarely been exploited in classroom learning. In fact, studies show that female students have more difficulty learning 3D geometry than male students, which is attributed to differences in spatial intelligence. Relating vector concepts to 3D geometry enables the learning achievement and mathematical ability of male and female students to be better balanced. Distances on a cube, cuboid, or pyramid can be expressed through the rectangular coordinates of points in space, and two coordinate points on a line define a vector. Two skew lines have a shortest distance and an angle between them. To calculate the shortest distance, two vectors representing the lines are first created using the position-vector concept; a normal vector to both is then obtained by the cross product; next, a connecting vector is formed from a pair of points, one on each of the skew lines; the shortest distance is the scalar orthogonal projection of this connecting vector onto the normal vector, as in the sketch below. The angle is calculated from the dot product of the two direction vectors followed by the inverse cosine. Applications include mathematics learning and the orthographic projection method.
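
    A worked example of the two computations, using the standard vector formulas for skew lines; the points and direction vectors are an arbitrary example.

```python
# Shortest distance between two skew lines (scalar projection of a connecting
# vector onto the common normal) and the angle between them (dot product).
import numpy as np

P1, u = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])   # line 1: P1 + t*u
P2, v = np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0])   # line 2: P2 + s*v

n = np.cross(u, v)                      # common normal of the two lines
w = P2 - P1                             # connects one point on each line

distance = abs(w @ n) / np.linalg.norm(n)    # scalar projection of w onto n
angle = np.degrees(np.arccos(abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))))

print(distance)   # 1.0 for this example
print(angle)      # 90.0 degrees
```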

  9. Generalized sidelobe canceller beamforming method for ultrasound imaging.

    PubMed

    Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang

    2017-03-01

    A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution, and the robustness against noise, of the traditional generalized sidelobe canceller (GSC) and the coherence-factor-combined method (GSC-CF). In the GSC algorithm, the weighting vector is divided into adaptive and non-adaptive parts, while the non-adaptive part does not block all of the desired signal. A modified steering vector for the IGSC algorithm is generated by projecting the non-adaptive vector onto the signal space constructed from the covariance matrix of the received data. The blocking matrix is generated from the orthogonal complement of the modified steering vector, and the weighting vector is updated accordingly. The performance of IGSC was investigated through simulations and experiments. In simulations, IGSC outperformed GSC-CF in spatial resolution by 0.1 mm, with or without noise, as well as in contrast ratio. The proposed IGSC can be further improved by combining it with CF. The experimental results, using a dataset provided by the University of Michigan, also validated the effectiveness of the proposed algorithm.

  10. Vector analysis of chemical variation in the lavas of Parícutin volcano, Mexico

    USGS Publications Warehouse

    Miesch, A.T.

    1979-01-01

    Compositional variations in the lavas of Parícutin volcano, Mexico, have been examined by an extended method of Q-mode factor analysis. Each sample composition is treated as a vector projected from an original eight-dimensional space into a vector system of three dimensions. The compositions represented by the vectors after projection are closely similar to the original compositions except for Na2O and Fe2O3. The vectors in the three-dimensional system cluster about three different planes that represent three stages of compositional change in the Parícutin lavas. Because chemical data on the compositions of the minerals in the lavas are presently lacking, interpretations of the mineral phases that may have been involved in fractional crystallization are based on CIPW norm calculations. Changes during the first stage are attributed largely to the fractional crystallization of plagioclase and olivine. Changes during the second stage can be explained by the separation of plagioclase and pyroxene. Changes during the final stage may have resulted mostly from the assimilation of a granitic material, as previously proposed by R. E. Wilcox.

  11. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    NASA Astrophysics Data System (ADS)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important because the environment changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of cameras. However, precise orientation parameters for light amateur cameras are not always available, because precision GPS and IMU units are costly and heavy. To automate data updating, correspondences between object vector data and the image may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling. The line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that an accuracy of 2.1 pixels may be reached, which is equivalent to 0.12 m in object space.

  12. VLBI2010 in NASA's Space Geodesy Project

    NASA Technical Reports Server (NTRS)

    Ma, Chopo

    2012-01-01

    In the summer of 2011 NASA approved the proposal for the Space Geodesy Project (SGP). A major element is developing at the Goddard Geophysical and Astronomical Observatory a prototype of the next generation of integrated stations with co-located VLBI, SLR, GNSS and DORIS instruments as well as a system for monitoring the vector ties. VLBI2010 is a key component of the integrated station. The objectives of SGP, the role of VLBI2010 in the context of SGP, near term plans and possible future scenarios will be discussed.

  13. The Grand Tour via Geodesic Interpolation of 2-frames

    NASA Technical Reports Server (NTRS)

    Asimov, Daniel; Buja, Andreas

    1994-01-01

    Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
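
    One way to sketch the interpolation is as a geodesic between two 2-D planes via principal angles. This is a simplification of the frame interpolation described above (it interpolates planes rather than specific frames) and assumes all principal angles are strictly positive; the random planes are for illustration only.

```python
# Sketch of geodesic interpolation between two 2-D projection planes in n-space
# (a Grassmann geodesic via principal angles).  Assumes the principal angles
# between the two planes are all strictly positive.
import numpy as np

def plane_geodesic(A, B, t):
    """A, B: (n, 2) orthonormal bases of the start and end planes; 0 <= t <= 1."""
    V1, c, V2t = np.linalg.svd(A.T @ B)          # c = cosines of principal angles
    theta = np.arccos(np.clip(c, -1.0, 1.0))
    Ga, Gb = A @ V1, B @ V2t.T                   # principal (aligned) bases
    H = (Gb - Ga * c) / np.sin(theta)            # unit directions orthogonal to Ga
    return Ga * np.cos(t * theta) + H * np.sin(t * theta)

rng = np.random.default_rng(7)
A, _ = np.linalg.qr(rng.normal(size=(6, 2)))     # start plane in R^6
B, _ = np.linalg.qr(rng.normal(size=(6, 2)))     # target plane

for t in (0.0, 0.5, 1.0):
    F = plane_geodesic(A, B, t)
    # Each intermediate frame is orthonormal, so it is a valid projection plane.
    assert np.allclose(F.T @ F, np.eye(2), atol=1e-8)

F1 = plane_geodesic(A, B, 1.0)
print(np.allclose(F1 @ F1.T, B @ B.T))           # endpoint spans the target plane
```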

  14. Unwinding the amplituhedron in binary

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Thomas, Hugh; Trnka, Jaroslav

    2018-01-01

    We present new, fundamentally combinatorial and topological characterizations of the amplituhedron. Upon projecting external data through the amplituhedron, the resulting configuration of points has a specified (and maximal) generalized "winding number". Equivalently, the amplituhedron can be fully described in binary: canonical projections of the geometry down to one dimension have a specified (and maximal) number of "sign flips" of the projected data. The locality and unitarity of scattering amplitudes are easily derived as elementary consequences of this binary code. Minimal winding defines a natural "dual" of the amplituhedron. This picture gives us an avatar of the amplituhedron purely in the configuration space of points in vector space (momentum-twistor space in the physics), a new interpretation of the canonical amplituhedron form, and a direct bosonic understanding of the scattering super-amplitude in planar N = 4 SYM as a differential form on the space of physical kinematical data.

  15. Measurement of the topological charge and index of vortex vector optical fields with a space-variant half-wave plate.

    PubMed

    Liu, Gui-Geng; Wang, Ke; Lee, Yun-Han; Wang, Dan; Li, Ping-Ping; Gou, Fangwang; Li, Yongnan; Tu, Chenghou; Wu, Shin-Tson; Wang, Hui-Tian

    2018-02-15

    Vortex vector optical fields (VVOFs) refer to a kind of vector optical field with an azimuth-variant polarization and a helical phase, simultaneously. Such a VVOF is defined by the topological index of the polarization singularity and the topological charge of the phase vortex. We present a simple method to measure the topological charge and index of VVOFs by using a space-variant half-wave plate (SV-HWP). The geometric phase grating of the SV-HWP diffracts a VVOF into ±1 orders with orthogonally left- and right-handed circular polarizations. By inserting a polarizer behind the SV-HWP, the two circular polarization states project into the linear polarization and then interfere with each other to form the interference pattern, which enables the direct measurement of the topological charge and index of VVOFs.

  16. Intruder Polarization.

    DTIC Science & Technology

    1985-12-01

    …mentioned project whose objective was to produce … wave propagation vector m. This means that, if we want to perform the two-dimensional inverse Fourier transformation (to transform from k-space…) … The propagation vector m, which was used extensively in the … approach, is still retained in the new approach. It is used in modelling m.

  17. Piezoelectrically forced vibrations of electroded doubly rotated quartz plates by state space method

    NASA Technical Reports Server (NTRS)

    Chander, R.

    1990-01-01

    The purpose of this investigation is to develop an analytical method to study the vibration characteristics of piezoelectrically forced quartz plates. The procedure can be summarized as follows. The three-dimensional governing equations of piezoelectricity, the constitutive equations, and the strain-displacement relationships are used in deriving the final equations. For this purpose, a state vector consisting of stresses and displacements is chosen, and the above equations are manipulated to obtain the projection of the derivative of the state vector with respect to the thickness coordinate onto the state vector itself. The solution for the state vector at any plane is then easily obtained in closed form in terms of the state vector quantities at a reference plane. To simplify the analysis, simple thickness mode and plane strain approximations are used.

  18. All (4,0): Sigma models with (4,0) off-shell supersymmetry

    NASA Astrophysics Data System (ADS)

    Hull, Chris; Lindström, Ulf

    2017-08-01

    Off-shell (4, 0) supermultiplets in 2-dimensions are formulated. These are used to construct sigma models whose target spaces are vector bundles over manifolds that are hyperkähler with torsion. The off-shell supersymmetry implies that the complex structures are simultaneously integrable and allows us to write actions using extended superspace and projective superspace, giving an explicit construction of the target space geometries.

  19. Quantum Bundle Description of Quantum Projective Spaces

    NASA Astrophysics Data System (ADS)

    Ó Buachalla, Réamonn

    2012-12-01

    We realise Heckenberger and Kolb's canonical calculus on quantum projective (N-1)-space Cq[CP^(N-1)] as the restriction of a distinguished quotient of the standard bicovariant calculus for the quantum special unitary group Cq[SU_N]. We introduce a calculus on the quantum sphere Cq[S^(2N-1)] in the same way. With respect to these choices of calculi, we present Cq[CP^(N-1)] as the base space of two different quantum principal bundles, one with total space Cq[SU_N], and the other with total space Cq[S^(2N-1)]. We go on to give Cq[CP^(N-1)] the structure of a quantum framed manifold. More specifically, we describe the module of one-forms of Heckenberger and Kolb's calculus as an associated vector bundle to the principal bundle with total space Cq[SU_N]. Finally, we construct strong connections for both bundles.

  20. Multivariate analysis of light scattering spectra of liquid dairy products

    NASA Astrophysics Data System (ADS)

    Khodasevich, M. A.

    2010-05-01

    Visible light scattering spectra from the surface layer of samples of commercial liquid dairy products are recorded with a colorimeter. The principal component method is used to analyze these spectra. Vectors representing the samples of dairy products in a multidimensional space of spectral counts are projected onto a three-dimensional subspace of principal components. The magnitudes of these projections are found to depend on the type of dairy product.
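
    The projection step can be sketched in a few lines of standard PCA; the synthetic spectra below stand in for the colorimeter recordings.

```python
# Minimal sketch: project recorded spectra onto the first three principal
# components and read off the 3-D coordinates of each sample.
import numpy as np

rng = np.random.default_rng(8)
n_samples, n_channels = 30, 128
spectra = rng.random((n_samples, n_channels))      # rows: samples, columns: counts

Xc = spectra - spectra.mean(axis=0)                # center the spectral counts
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal component analysis
coords_3d = Xc @ Vt[:3].T                          # projection onto 3 components

explained = (s[:3] ** 2) / (s ** 2).sum()          # variance captured by each PC
print(coords_3d.shape, explained)
```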

  1. HI-STAR. Health Improvements Through Space Technologies and Resources: Final Report

    NASA Technical Reports Server (NTRS)

    Finarelli, Margaret G.

    2002-01-01

    The purpose of this document is to describe a global strategy to integrate the use of space technology in the fight against malaria. Given the well-documented relationship between the vector and its environment, and the ability of existing space technologies to monitor environmental factors, malaria is a strong candidate for the application of space technology. The concept of a malaria early warning system has been proposed in the past, and pilot studies have been conducted. The HI-STAR project (Health Improvement through Space Technologies and Resources) seeks to build on this concept and enhance the space elements of the suggested framework. As such, the mission statement for this International Space University design project has been defined as follows: "Our mission is to develop and promote a global strategy to help combat malaria using space technology". A general overview of malaria, aspects of how space technology can be useful, and an outline of the HI-STAR strategy are presented.

  2. Ring polymer dynamics in curved spaces

    NASA Astrophysics Data System (ADS)

    Wolf, S.; Curotto, E.

    2012-07-01

    We formulate an extension of the ring polymer dynamics approach to curved spaces using stereographic projection coordinates. We test the theory by simulating the particle in a ring, T^1, mapped by a stereographic projection using three potentials. Two of these are quadratic, and one is a nonconfining sinusoidal model. We propose a new class of algorithms for the integration of the ring polymer Hamilton equations in curved spaces. These are designed to improve the energy conservation of symplectic integrators based on the split operator approach. For manifolds, the position-position autocorrelation function can be formulated in numerous ways. We find that the position-position autocorrelation function computed from configurations in the Euclidean space R^2 that contains T^1 as a submanifold has the best statistical properties. The agreement with exact results obtained with vector space methods is excellent for all three potentials, for all values of time in the interval simulated, and for a relatively broad range of temperatures.

  3. Analysis of chaos attractors of MCG-recordings.

    PubMed

    Jiang, Shiqin; Yang, Fan; Yi, Panke; Chen, Bo; Luo, Ming; Wang, Lemin

    2006-01-01

    By studying the chaos attractor of the cardiac magnetic induction strength B(z) generated by the electrical activity of the heart, we found that its projection in the reconstructed phase space has a shape similar to the map of the total current dipole vector. It is worth noting that the map of the total current dipole vector is computed from MCG recordings measured at 36 locations, whereas the chaos attractor of B(z) is generated from only one cardiac magnetic field recording on the measurement plane. We discuss only two subjects of different ages in this paper.
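
    Reconstructing such an attractor from a single channel is commonly done by time-delay embedding; the sketch below uses a synthetic signal, and the delay, embedding dimension, and the 2-D PCA view are illustrative choices rather than the authors' settings.

```python
# Sketch: reconstruct a phase-space attractor from a single magnetic channel by
# time-delay embedding, then view a 2-D projection of the trajectory.
import numpy as np

def delay_embed(x, dim, tau):
    """Build the (N, dim) delay-embedding matrix of a 1-D signal x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0.0, 20.0, 4000)
bz = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)   # stand-in for B_z(t)

X = delay_embed(bz, dim=5, tau=40)

# Project the reconstructed trajectory onto its first two principal directions
# to obtain a 2-D picture of the attractor.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
attractor_2d = Xc @ Vt[:2].T
print(attractor_2d.shape)
```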

  4. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer, such as the self-organizing feature map (SOM) or NG, with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  5. Direct solution of the H(1s)-H + long-range interaction problem in momentum space

    NASA Astrophysics Data System (ADS)

    Koga, Toshikatsu

    1985-02-01

    Perturbation equations for the H(1s)-H+ long-range interaction are solved directly in momentum space up to the fourth order with respect to the reciprocal of the internuclear distance. As in the hydrogen atom problem, the Fock transformation is used which projects the momentum vector of an electron from the three-dimensional hyperplane onto the four-dimensional hypersphere. Solutions are given as linear combinations of several four-dimensional spherical harmonics. The present results add an example to the momentum-space solution of the nonspherical potential problem.

  6. Betti numbers of graded modules and cohomology of vector bundles

    NASA Astrophysics Data System (ADS)

    Eisenbud, David; Schreyer, Frank-Olaf

    2009-07-01

    In the remarkable paper Graded Betti numbers of Cohen-Macaulay modules and the multiplicity conjecture, Mats Boij and Jonas Soederberg conjectured that the Betti table of a Cohen-Macaulay module over a polynomial ring is a positive linear combination of Betti tables of modules with pure resolutions. We prove a strengthened form of their conjectures. Applications include a proof of the Multiplicity Conjecture of Huneke and Srinivasan and a proof of the convexity of a fan naturally associated to the Young lattice. With the same tools we show that the cohomology table of any vector bundle on projective space is a positive rational linear combination of the cohomology tables of what we call supernatural vector bundles. Using this result we give new bounds on the slope of a vector bundle in terms of its cohomology.

  7. Development of software for the MSFC solar vector magnetograph

    NASA Technical Reports Server (NTRS)

    Kineke, Jack

    1996-01-01

    The Marshall Space Flight Center Solar Vector Magnetograph is a special purpose telescope used to measure the vector magnetic field in active areas on the surface of the sun. This instrument measures the linear and circular polarization intensities (the Stokes vectors Q, U and V) produced by the Zeeman effect on a specific spectral line due to the solar magnetic field from which the longitudinal and transverse components of the magnetic field may be determined. Beginning in 1990 as a Summer Faculty Fellow in project JOVE and continuing under NASA Grant NAG8-1042, the author has been developing computer software to perform these computations, first using a DEC MicroVAX system equipped with a high speed array processor, and more recently using a DEC AXP/OSF system. This summer's work is a continuation of this development.

  8. A hybrid quantum eraser scheme for characterization of free-space and fiber communication channels

    NASA Astrophysics Data System (ADS)

    Nape, Isaac; Kyeremah, Charlotte; Vallés, Adam; Rosales-Guzmán, Carmelo; Buah-Bassuah, Paul K.; Forbes, Andrew

    2018-02-01

    We demonstrate a simple projective measurement based on the quantum eraser concept that can be used to characterize the disturbances of any communication channel. Quantum erasers are commonly implemented as spatially separated path interferometric schemes. Here we exploit the advantages of redefining the which-path information in terms of spatial modes, replacing physical paths with abstract paths of orbital angular momentum (OAM). Remarkably, vector modes (natural modes of free-space and fiber) have a non-separable feature of spin-orbit coupled states, equivalent to the description of two independently marked paths. We explore the effects of fiber perturbations by probing a step-index optical fiber channel with a vector mode, relevant to high-order spatial mode encoding of information for ultra-fast fiber communications.

  9. Using a Smartphone Camera for Nanosatellite Attitude Determination

    NASA Astrophysics Data System (ADS)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
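
    The thresholding-and-centroiding step described above is simple enough to sketch. The following Python fragment is a minimal illustration, not the PhoneSat code: the function name, the pinhole-camera parameters (focal length in pixels, principal point) and the synthetic test image are all assumptions introduced for the example.

    ```python
    import numpy as np

    def moon_vector(image, threshold, focal_length_px, principal_point):
        """Threshold an image, centroid the bright blob (assumed to be the Moon),
        and convert the centroid into a unit pointing vector in the camera frame
        using a simple pinhole model. All parameter names are illustrative."""
        mask = image > threshold
        if not mask.any():
            return None                          # no bright object found
        rows, cols = np.nonzero(mask)
        weights = image[rows, cols].astype(float)
        v = np.average(rows, weights=weights)    # centroid row (image y)
        u = np.average(cols, weights=weights)    # centroid column (image x)
        cx, cy = principal_point
        vec = np.array([u - cx, v - cy, focal_length_px])
        return vec / np.linalg.norm(vec)

    # Synthetic 100x100 frame with a bright disc off-centre standing in for the Moon.
    yy, xx = np.mgrid[0:100, 0:100]
    img = np.zeros((100, 100))
    img[(yy - 30) ** 2 + (xx - 70) ** 2 < 25] = 200.0
    print(moon_vector(img, threshold=100.0, focal_length_px=120.0,
                      principal_point=(50.0, 50.0)))
    ```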

  10. Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.

    PubMed

    Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua

    2016-11-01

    This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.

  11. The finite state projection algorithm for the solution of the chemical master equation.

    PubMed

    Munsky, Brian; Khammash, Mustafa

    2006-01-28

    This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet prespecified tolerance in the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods.
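
    As a concrete illustration of the truncation idea, the sketch below applies a finite-state-projection-style solve to a simple birth-death process (constant production, linear degradation). It is only a toy, not the authors' algorithm: the function name, rate constants and truncation size are assumptions, the adaptive expansion of the projection described in the abstract is omitted, and the error certificate is simply the probability mass missing from the truncated solution.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def fsp_birth_death(k_prod, k_deg, n_max, p0_index, t):
        """Truncated CME solve for a birth-death process with states 0..n_max.
        Returns the probability vector at time t and the FSP-style error bound
        (mass that may have leaked out of the projection)."""
        n_states = n_max + 1
        A = np.zeros((n_states, n_states))
        for n in range(n_states):
            # production n -> n+1 at rate k_prod (leaves the projection at n_max)
            A[n, n] -= k_prod
            if n < n_max:
                A[n + 1, n] += k_prod
            # degradation n -> n-1 at rate k_deg * n
            if n > 0:
                A[n, n] -= k_deg * n
                A[n - 1, n] += k_deg * n

        p0 = np.zeros(n_states)
        p0[p0_index] = 1.0
        p_t = expm(A * t) @ p0              # truncated master-equation solution
        error_bound = 1.0 - p_t.sum()       # probability not captured by the projection
        return p_t, error_bound

    p, err = fsp_birth_death(k_prod=10.0, k_deg=1.0, n_max=40, p0_index=0, t=2.0)
    print(f"P(n=10) = {p[10]:.4f}, FSP error bound = {err:.2e}")
    ```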

  12. Project ISIAH - Experiment on the effects of micro-gravity on hornets' nest building and activity

    NASA Astrophysics Data System (ADS)

    Brull, Lily

    1992-10-01

    An Israel Space Agency Investigation About Hornets (ISIAH) aimed at determining whether hornets are capable of retaining their unique ability of orientation under microgravity conditions is described. The Oriental Hornets used in the experiment are capable of building combs in the direction of the gravitational vector and detecting minute changes in gravitational force. Data obtained may be used to facilitate human adaptation to space conditions as well as rehabilitation after returning to earth.

  13. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    NASA Astrophysics Data System (ADS)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  14. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory.

    PubMed

    Lee, M; Leiter, K; Eisner, C; Breuer, A; Wang, X

    2017-09-21

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  15. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is crucial for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults in multivariate dynamic systems. The work first defines deviation vectors for sensor observations, and then defines and derives, by several different methods, the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors. By projecting a process deviation vector onto the space spanned by the BSFM, the approach obtains a vector of weights on each fault direction that is used for multiple sensor fault diagnosis. A novel monitoring index is also proposed and the corresponding sensor fault detectability is derived. The same weight vector is further used to isolate and identify multiple sensor faults, and the isolatability and identifiability are discussed. Simulation examples and a comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
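
    The projection step described above can be sketched as an ordinary least-squares projection onto the columns of the BSFM. The fragment below is a hedged illustration only; the function name, the toy fault matrix, and the use of plain least squares (rather than the paper's specific derivation and monitoring index) are assumptions.

    ```python
    import numpy as np

    def fault_weights(deviation, bsfm):
        """Project a sensor deviation vector onto the span of the basic sensor
        fault matrix (columns = normalized basic fault directions) and return
        the weight on each fault direction plus the residual norm."""
        # Least-squares projection; the weights are the coordinates of the
        # deviation vector in the fault subspace.
        w, *_ = np.linalg.lstsq(bsfm, deviation, rcond=None)
        residual = deviation - bsfm @ w
        return w, np.linalg.norm(residual)

    # Toy example: 4 sensors, faults modelled as unit biases on sensors 0 and 2.
    bsfm = np.eye(4)[:, [0, 2]]
    deviation = np.array([2.0, 0.1, -1.5, 0.0])   # observed deviation vector
    w, res = fault_weights(deviation, bsfm)
    print(w, res)                                  # large weights flag the faulty sensors
    ```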

  16. Transformation of vector magnetograms and the problems associated with the effects of perspective and the azimuthal ambiguity

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen; Hagyard, M. J.

    1990-01-01

    Off-center vector magnetograms which use all three components of the measured field provide the maximum information content from the photospheric field and can provide the most consistent potential field independent of the viewing angle by defining the normal component of the field. The required transformations of the magnetic field vector and the geometric mapping of the observed field in the image plane into the heliographic plane have been described. Here we discuss the total transformation of specific vector magnetograms to detail the problems and procedures that one should be aware of in analyzing observational magnetograms. The effect of the 180-deg ambiguity of the observed transverse field is considered as well as the effect of curvature of the photosphere. Specific results for active regions AR 2684 (September 23, 1980) and AR 4474 (April 26, 1984) from the Marshall Space Flight Center Vector magnetograph are described which point to the need for the heliographic projection in determining the field structure of an active region.

  17. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses make it possible to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses the shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous computational burden reduction. The technique is based on sequential projection pursuit where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
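
    To make the projection idea concrete, the following sketch projects multivariate DSFs onto a single projection vector and scores deviations from a healthy baseline. It is a rough stand-in for the paper's method: the naive random search replaces the advanced evolutionary strategy, and the feature data, function names and deviation statistic are assumptions made for illustration.

    ```python
    import numpy as np

    def damage_index(train_dsf, test_dsf, w):
        """Project damage-sensitive feature vectors onto a single projection
        vector w and return a deviation statistic for each test observation
        relative to the healthy training distribution."""
        w = w / np.linalg.norm(w)
        z_train = train_dsf @ w
        z_test = test_dsf @ w
        return np.abs(z_test - z_train.mean()) / (z_train.std() + 1e-12)

    # Toy data: healthy baseline plus a 'damaged' set shifted along a fixed direction.
    rng = np.random.default_rng(3)
    healthy = rng.normal(size=(200, 20))
    shift = rng.normal(size=20)
    damaged = rng.normal(size=(200, 20)) + 0.4 * shift

    # Naive random search, standing in for the paper's evolutionary strategy:
    # look for the single projection vector that best separates the two sets.
    best_w, best_sep = None, -np.inf
    for _ in range(2000):
        w = rng.normal(size=20)
        sep = damage_index(healthy, damaged, w).mean()
        if sep > best_sep:
            best_w, best_sep = w, sep
    print(best_sep)
    ```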

  18. Space Shuttle Projects

    NASA Image and Video Library

    1988-06-21

    The predominant themes are: a new beginning (sunrise), a safe mission (stylized launch and plume), the building upon the traditional strengths of NASA (the red vector which symbolizes aeronautics on the original NASA insignia), and a remembrance of their seven colleagues who died aboard Challenger (the seven-starred Big Dipper). The patch was designed by artist Stephen R. Hustvedt of Annapolis, MD.

  19. Where Am I? A Change of Basis Project

    ERIC Educational Resources Information Center

    Selby, Christina

    2016-01-01

    Linear algebra students are typically introduced to the problem of how to convert from one coordinate system to another in a very abstract way. Often, two bases for a given vector space are provided, and students are taught how to determine a transition matrix to be used for changing coordinates. If students are successful in memorizing this…
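
    For readers who want the computation rather than the abstraction, a minimal numerical example of a transition matrix between two bases of R^2 might look like the following; the specific bases and coordinate vector are arbitrary choices for illustration.

    ```python
    import numpy as np

    # Two bases of R^2, written as columns of B and C.
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    C = np.array([[2.0, 0.0],
                  [1.0, 1.0]])

    # Transition matrix from B-coordinates to C-coordinates: solve C P = B.
    P = np.linalg.solve(C, B)

    v_B = np.array([3.0, -2.0])   # coordinates of a vector relative to basis B
    v_C = P @ v_B                 # the same vector expressed in basis C

    # Check: both coordinate vectors describe the same point in the standard basis.
    assert np.allclose(B @ v_B, C @ v_C)
    print(P, v_C)
    ```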

  20. Design, Assembly, Integration, and Testing of a Power Processing Unit for a Cylindrical Hall Thruster, the NORSAT-2 Flatsat, and the Vector Gravimeter for Asteroids Instrument Computer

    NASA Astrophysics Data System (ADS)

    Svatos, Adam Ladislav

    This thesis describes the author's contributions to three separate projects. The bus of the NORSAT-2 satellite was developed by the Space Flight Laboratory (SFL) for the Norwegian Space Centre (NSC) and Space Norway. The author's contributions to the mission were performing unit tests for the components of all the spacecraft subsystems as well as designing and assembling the flatsat from flight spares. Gedex's Vector Gravimeter for Asteroids (VEGA) is an accelerometer for spacecraft. The author's contributions to this payload were modifying the instrument computer board schematic, designing the printed circuit board, developing and applying test software, and performing thermal acceptance testing of two instrument computer boards. The SFL's cylindrical Hall effect thruster combines the cylindrical configuration for a Hall thruster and uses permanent magnets to achieve miniaturization and low power consumption, respectively. The author's contributions were to design, build, and test an engineering model power processing unit.

  1. Optical flow versus retinal flow as sources of information for flight guidance

    NASA Technical Reports Server (NTRS)

    Cutting, James E.

    1991-01-01

    The appropriate description of visual information for flight guidance is considered: optical flow versus retinal flow. Most descriptions in the psychological literature are based on optical flow. However, human eyes move, and this movement complicates the issues at stake, particularly when movement of the observer is involved. The question addressed is whether an observer, whose eyes register only retinal flow, can use information in optical flow. It is suggested that the observer cannot and does not reconstruct the image in optical flow; instead, retinal flow is used. The retinal array is defined as the projection of a three-dimensional space onto a point and beyond to a movable, nearly hemispheric sensing device, like the retina. The optical array is defined as the projection of a three-dimensional environment to a point within that space. Flow is defined as global motion represented as a field of vectors, best placed on a spherical projection surface. Specifically, flow is the mapping of the field of changes in position of corresponding points on objects in three-dimensional space onto a point, where that point has moved in position.

  2. Reducing Artifacts in TMS-Evoked EEG

    NASA Astrophysics Data System (ADS)

    Fuertes, Juan José; Travieso, Carlos M.; Álvarez, A.; Ferrer, M. A.; Alonso, J. B.

    Transcranial magnetic stimulation induces weak currents within the cranium to activate neuronal firing and its response is recorded using electroencephalography in order to study the brain directly. However, different artifacts contaminate the results. The goal of this study is to process these artifacts and reduce them digitally. Electromagnetic, blink and auditory artifacts are considered, and Signal-Space Projection, Independent Component Analysis and Wiener Filtering methods are used to reduce them. These last two produce a successful solution for electromagnetic artifacts. Regarding the other artifacts, processed with Signal-Space Projection, the method reduces the artifact but modifies the signal as well. Nonetheless, they are modified in an exactly known way and the vector used for the projection is conserved to be taken into account when analyzing the resulting signals. A system which combines the proposed methods would improve the quality of the information presented to physicians.
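
    A generic Signal-Space Projection step, of the kind referred to above, amounts to projecting multichannel data onto the orthogonal complement of the artifact subspace. The fragment below is an illustrative sketch only, not the authors' pipeline; the channel count, the synthetic data and the single artifact topography are assumptions, and the closing comment notes the known side effect that signal components along the projected-out direction are attenuated as well.

    ```python
    import numpy as np

    def ssp_projector(artifact_vectors):
        """Build a signal-space projection operator that removes the subspace
        spanned by the given artifact topographies (channels x k)."""
        U, _, _ = np.linalg.svd(artifact_vectors, full_matrices=False)
        return np.eye(artifact_vectors.shape[0]) - U @ U.T

    # Toy data: 8 channels x 500 samples contaminated by one artifact topography.
    rng = np.random.default_rng(0)
    artifact_topo = rng.normal(size=(8, 1))
    clean = rng.normal(size=(8, 500))
    data = clean + artifact_topo @ rng.normal(size=(1, 500)) * 5.0

    P = ssp_projector(artifact_topo)
    cleaned = P @ data   # artifact component removed, but any signal lying along
                         # the same topography is attenuated too (a known trade-off)
    ```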

  3. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  4. The Space Geodesy Project and Radio Frequency Interference Characterization and Mitigation

    NASA Technical Reports Server (NTRS)

    Lawrence, Hilliard M.; Beaudoin, C.; Corey, B. E.; Tourain, C. L.; Petrachenko, B.; Dickey, John

    2013-01-01

    The Space Geodesy Project (SGP) development by NASA is an effort to co-locate the four international geodetic techniques Satellite Laser Ranging (SLR) and Lunar Laser Ranging (LLR), Very Long Baseline Interferometry (VLBI), Global Navigation Satellite System (GNSS), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) into one tightly referenced campus and coordinated reference frame analysis. The SGP requirement locates these stations within a small area to maintain line-of-sight and a frequent automated survey known as the vector tie system. This creates a direct conflict with the new broadband VLBI technique: broadband here means 2-14 GHz, with RFI susceptibility at -80 dBW or higher due to the sensitive RF components in the front end of the radio receiver.

  5. Computational Approaches to Image Understanding.

    DTIC Science & Technology

    1981-10-01

    representing points, edges, surfaces, and volumes to facilitate display. The geometry of perspective and parallel (or orthographic) projection has...of making the image forming process explicit. This in turn leads to a concern with geometry, such as the properties of the gradient, stereographic, and...dual spaces. Combining geometry and smoothness leads naturally to multi-variate vector analysis, and to differential geometry. For the most part, a

  6. Proper projective symmetry in LRS Bianchi type V spacetimes

    NASA Astrophysics Data System (ADS)

    Shabbir, Ghulam; Mahomed, K. S.; Mahomed, F. M.; Moitsheki, R. J.

    2018-04-01

    In this paper, we investigate proper projective vector fields of locally rotationally symmetric (LRS) Bianchi type V spacetimes using direct integration and algebraic techniques. Despite the non-degeneracy in the Riemann tensor eigenvalues, we classify proper Bianchi type V spacetimes and show that the above spacetimes do not admit proper projective vector fields. Here, in all the cases projective vector fields are Killing vector fields.

  7. GNSS Space-Time Interference Mitigation and Attitude Determination in the Presence of Interference Signals

    PubMed Central

    Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard

    2015-01-01

    The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios. PMID:26016909

  8. GNSS space-time interference mitigation and attitude determination in the presence of interference signals.

    PubMed

    Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard

    2015-05-26

    The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios.

  9. Fundamental Principles of Classical Mechanics: A Geometrical Perspective

    NASA Astrophysics Data System (ADS)

    Lam, Kai S.

    2014-07-01

    Classical mechanics is the quantitative study of the laws of motion for macroscopic physical systems with mass. The fundamental laws of this subject, known as Newton's Laws of Motion, are expressed in terms of second-order differential equations governing the time evolution of vectors in a so-called configuration space of a system (see Chapter 12). In an elementary setting, these are usually vectors in 3-dimensional Euclidean space, such as position vectors of point particles; but typically they can be vectors in higher dimensional and more abstract spaces. A general knowledge of the mathematical properties of vectors, not only in their most intuitive incarnations as directed arrows in physical space but as elements of abstract linear vector spaces, and those of linear operators (transformations) on vector spaces as well, is then indispensable in laying the groundwork for both the physical and the more advanced mathematical - more precisely topological and geometrical - concepts that will prove to be vital in our subject. In this beginning chapter we will review these properties, and introduce the all-important related notions of dual spaces and tensor products of vector spaces. The notational convention for vectorial and tensorial indices used for the rest of this book (except when otherwise specified) will also be established...

  10. Two-spinor description of massive particles and relativistic spin projection operators

    NASA Astrophysics Data System (ADS)

    Isaev, A. P.; Podoinitsyn, M. A.

    2018-04-01

    On the basis of the Wigner unitary representations of the covering group ISL(2, C) of the Poincaré group, we obtain spin-tensor wave functions of free massive particles with arbitrary spin. The wave functions automatically satisfy the Dirac-Pauli-Fierz equations. In the framework of the two-spinor formalism we construct spin-vectors of polarizations and obtain conditions that fix the corresponding relativistic spin projection operators (Behrends-Fronsdal projection operators). With the help of these conditions we find explicit expressions for relativistic spin projection operators for integer spins (Behrends-Fronsdal projection operators) and then find relativistic spin projection operators for half-integer spins. These projection operators determine the numerators in the propagators of fields of relativistic particles. We deduce generalizations of the Behrends-Fronsdal projection operators for arbitrary space-time dimensions D > 2.

  11. Multiscale vector fields for image pattern recognition

    NASA Technical Reports Server (NTRS)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
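
    A minimal sketch of the vector-sum measurement described above is given below. The filter orientations and responses are made-up numbers, and the plain sum shown here shares the mod-180° orientation ambiguity that is one of the limitations the abstract alludes to.

    ```python
    import numpy as np

    def local_orientation(filter_orientations_deg, responses):
        """Vector-sum estimate of local orientation from a bank of oriented filters.

        Each filter contributes a vector pointing along its preferred orientation
        with length equal to its response strength; the resultant vector's angle
        is the orientation estimate and its magnitude the strength of the
        orientation preference."""
        angles = np.deg2rad(filter_orientations_deg)
        vx = np.sum(responses * np.cos(angles))
        vy = np.sum(responses * np.sin(angles))
        orientation = np.rad2deg(np.arctan2(vy, vx))
        strength = np.hypot(vx, vy)
        return orientation, strength

    # Four oriented filters at 0, 45, 90, 135 degrees responding to an edge near 45 deg.
    print(local_orientation(np.array([0.0, 45.0, 90.0, 135.0]),
                            np.array([0.3, 1.0, 0.4, 0.1])))
    ```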

  12. Vector calculus in non-integer dimensional space and its applications to fractal media

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-02-01

    We suggest a generalization of vector calculus for the case of non-integer dimensional space. The first and second orders operations such as gradient, divergence, the scalar and vector Laplace operators for non-integer dimensional space are defined. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider elasticity of fractal materials (fractal hollow ball and fractal cylindrical pipe with pressure inside and outside), steady distribution of heat in fractal media, electric field of fractal charged cylinder. We solve the correspondent equations for non-integer dimensional space models.

  13. Characterization of a Dynamic String Method for the Construction of Transition Pathways in Molecular Reactions

    PubMed Central

    Johnson, Margaret E.; Hummer, Gerhard

    2012-01-01

    We explore the theoretical foundation of different string methods used to find dominant reaction pathways in high-dimensional configuration spaces. Pathways are assessed by the amount of reactive flux they carry and by their orientation relative to the committor function. By examining the effects of transforming between different collective coordinates that span the same underlying space, we unmask artificial coordinate dependences in strings optimized to follow the free energy gradient. In contrast, strings optimized to follow the drift vector produce reaction pathways that are significantly less sensitive to reparameterizations of the collective coordinates. The differences in these paths arise because the drift vector depends on both the free energy gradient and the diffusion tensor of the coarse collective variables. Anisotropy and position dependence of diffusion tensors arise commonly in spaces of coarse variables, whose generally slow dynamics are obtained by nonlinear projections of the strongly coupled atomic motions. We show here that transition paths constructed to account for dynamics by following the drift vector will (to a close approximation) carry the maximum reactive flux both in systems with isotropic position dependent diffusion, and in systems with constant but anisotropic diffusion. We derive a simple method for calculating the committor function along paths that follow the reactive flux. Lastly, we provide guidance for the practical implementation of the dynamic string method. PMID:22616575

  14. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and Brazil's National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for the wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point to establish a strong research program on Solar Physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  15. Integral transformation solution of free-space cylindrical vector beams and prediction of modified Bessel-Gaussian vector beams.

    PubMed

    Li, Chun-Fang

    2007-12-15

    A unified description of free-space cylindrical vector beams is presented that is an integral transformation solution to the vector Helmholtz equation and the transversality condition. In the paraxial condition, this solution not only includes the known J(1) Bessel-Gaussian vector beam and the axisymmetric Laguerre-Gaussian vector beam that were obtained by solving the paraxial wave equations but also predicts two kinds of vector beams, called modified Bessel-Gaussian vector beams.

  16. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Recognizing people automatically through visual surveillance is increasingly necessary for public security. Gait-based identification focuses on recognizing a person from walking video using computer vision and image processing approaches, and as a potential biometric measure it has attracted growing research interest. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis and temporal-space analysis is proposed. Using background estimation and image subtraction, a binary image sequence is obtained from the surveillance video. By comparing pairs of adjacent images in this gait sequence, a sequence of binary difference images is obtained; each difference image reflects how the body moves as the person walks. Temporal-space features are extracted from the difference image sequence as follows: projecting a single difference image onto the Y axis or the X axis yields a vector, so projecting every difference image in the sequence onto each axis yields two matrices that characterize the walking style. Two-dimensional principal component analysis (2DPCA) is then used to transform these two matrices into two vectors while preserving maximum separability. Finally, the similarity of two gait sequences is computed as the Euclidean distance between the corresponding vectors. The performance of the method is illustrated using the CASIA Gait Database, and the experimental results show that it performs well on this dataset.
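
    The axis-projection step can be sketched directly from the description above. The fragment below is an illustration under assumptions (random stand-in silhouettes, made-up frame sizes) and stops short of the 2DPCA transform and the CASIA evaluation.

    ```python
    import numpy as np

    def gait_projection_features(binary_frames):
        """Given a sequence of binary silhouette frames (T x H x W), compute the
        frame-to-frame difference images and project each onto the X and Y axes.

        Returns two matrices: row-projection features (T-1 x H) and
        column-projection features (T-1 x W)."""
        frames = np.asarray(binary_frames, dtype=np.int16)
        diffs = np.abs(np.diff(frames, axis=0))   # body motion between adjacent frames
        proj_y = diffs.sum(axis=2)                # projection onto the Y axis (per row)
        proj_x = diffs.sum(axis=1)                # projection onto the X axis (per column)
        return proj_y, proj_x

    # Toy example: random "silhouettes" just to show the shapes involved.
    rng = np.random.default_rng(1)
    frames = rng.integers(0, 2, size=(10, 64, 48))
    py, px = gait_projection_features(frames)
    print(py.shape, px.shape)                     # (9, 64) and (9, 48)
    ```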

  17. Projection correlation between two random vectors.

    PubMed

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is [Formula: see text]-consistent if the two random vectors are independent and root-[Formula: see text]-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.

  18. Beaconless Pointing for Deep-Space Optical Communication

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.; Aretskin-Hariton, Eliot; Le, Dzu K.; Sands, Obed S.; Wroblewski, Adam

    2016-01-01

    Free space optical communication is of interest to NASA as a complement to existing radio frequency communication methods. The potential for an increase in science data return capability over current radio-frequency communications is the primary objective. Deep space optical communication requires laser beam pointing accuracy on the order of a few microradians. The laser beam pointing approach discussed here operates without the aid of a terrestrial uplink beacon. Precision pointing is obtained from an on-board star tracker in combination with inertial rate sensors and an outgoing beam reference vector. The beaconless optical pointing system presented in this work is the current approach for the Integrated Radio and Optical Communication (iROC) project.

  19. A unified approach to χ2 discriminators for searches of gravitational waves from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Dhurandhar, Sanjeev; Gupta, Anuradha; Gadre, Bhooshan; Bose, Sukanta

    2017-11-01

    We describe a general mathematical framework for χ2 discriminators in the context of the compact binary coalescence (CBC) search. We show that with any χ2 is associated a vector bundle over the signal manifold, that is, the manifold traced out by the signal waveforms in the function space of data segments. The χ2 is then defined as the square of the L2 norm of the data vector projected onto a finite-dimensional subspace (the fibre) of the Hilbert space of data trains and orthogonal to the signal waveform. Any such fibre leads to a χ2 discriminator, and the full vector bundle comprising the subspaces and the base manifold constitute the χ2 discriminator. We show that the χ2 discriminators used so far in the CBC searches correspond to different fibre structures constituting different vector bundles on the same base manifold, namely, the parameter space. Several benefits accrue from this general formulation. It most importantly shows that there are a plethora of χ2's available and further gives useful insights into the vetoing procedure. It indicates procedures to formulate new χ2's that could be more effective in discriminating against commonly occurring glitches in the data. It also shows that no χ2 with a reasonable number of degrees of freedom is foolproof. It could also shed light on understanding why the traditional χ2 works so well. We show how to construct a generic χ2 given an arbitrary set of vectors in the function space of data segments. These vectors could be chosen such that glitches have maximum projection on them. Further, for glitches that can be modeled, we are able to quantify the efficiency of a given χ2 discriminator by a probability. Second, we propose a family of ambiguity χ2 discriminators that is an alternative to the traditional one [B. Allen, Phys. Rev. D 71, 062001 (2005), 10.1103/PhysRevD.71.062001, B. Allen et al., Phys. Rev. D 85, 122006 (2012)., 10.1103/PhysRevD.85.122006]. Any such ambiguity χ2 makes use of the filtered output of the template bank, thus adding negligible cost to the overall search. It is termed so because it makes significant use of the ambiguity function. We first describe the formulation with the help of the Newtonian waveform, apply the ambiguity χ2 to the spinless TaylorF2 waveforms, and test it on simulated data. We show that the ambiguity χ2 essentially gives a clean separation between glitches and signals. We indicate how the ambiguity χ2 can be generalized to detector networks for coherent observations. The effects of mismatch between signal and templates on a χ2 discriminator using general arguments and the geometrical framework are also investigated.
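
    The geometric definition above, the squared norm of the data projected onto a fibre orthogonal to the signal, can be sketched in a few lines for white noise. The fragment below is a toy under assumptions (randomly chosen fibre vectors, a simple sinusoidal "template", unit-variance noise) and is not the pipeline χ2 or the ambiguity χ2 of the paper.

    ```python
    import numpy as np

    def chi2_statistic(data, template, fibre_vectors):
        """Generic chi-squared veto: project the (whitened) data segment onto an
        orthonormal basis of the fibre subspace orthogonal to the template and
        return the squared L2 norm of that projection."""
        h = template / np.linalg.norm(template)
        # Remove the template component from the fibre vectors, then orthonormalize.
        V = fibre_vectors - np.outer(h, h @ fibre_vectors)
        Q, _ = np.linalg.qr(V)
        coeffs = Q.T @ data
        return float(coeffs @ coeffs)   # chi^2 with len(coeffs) degrees of freedom

    rng = np.random.default_rng(2)
    n = 1024
    template = np.sin(2 * np.pi * 8 * np.arange(n) / n)
    fibre = rng.normal(size=(n, 4))     # 4 arbitrary veto directions (random here;
                                        # in practice chosen to match glitch shapes)
    signal_like = 3.0 * template + rng.normal(size=n)
    glitch_like = 3.0 * fibre[:, 0] + rng.normal(size=n)
    print(chi2_statistic(signal_like, template, fibre),   # near the 4 degrees of freedom
          chi2_statistic(glitch_like, template, fibre))   # much larger for the glitch
    ```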

  20. Solution of the determinantal assignment problem using the Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Karcanias, Nicos; Leventides, John

    2016-02-01

    The paper provides a direct solution to the determinantal assignment problem (DAP) which unifies all frequency assignment problems of the linear control theory. The current approach is based on the solvability of the exterior equation ? where ? is an n-dimensional vector space over ? which is an integral part of the solution of DAP. New criteria for the existence of solutions and for their computation are given, based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix, ? of the vector ?, which is constructed by the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable), if and only if ? where ?; the solution space for a decomposable ?, is the space ?. This provides an alternative linear algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as computing approximate solutions, when exact solutions do not exist.

  1. Project Physics Programmed Instruction, Vectors 1.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This programmed instruction booklet is an interim version of instructional materials being developed by Harvard Project Physics. It is the first in a series of three booklets on vectors and covers the definitions of vectors and scalars, drawing vector quantities to scale, and negative vectors. For others in this series, see SE 015 550 and SE 015…

  2. Project Physics Programmed Instruction, Vectors 2.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This is the second of a series of three programmed instruction booklets on vectors developed by Harvard Project Physics. It covers adding two or more vectors together, and finding a third vector that could be added to two given vectors to make a sum of zero. For other booklets in this series, see SE 015 549 and SE 015 551. (DT)

  3. Adaptive Bayes classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.

    1975-01-01

    An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.

  4. Project Physics Programmed Instruction, Vectors 3.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This is the third of a series of three programmed instruction booklets on vectors developed by Harvard Project Physics. Separating vectors into components and obtaining a vector from its components are the topics covered. For other booklets in this series, see SE 015 549 and SE 015 550. (DT)

  5. UVMag: Space UV and visible spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Pertenais, Martin; Neiner, Coralie; Parès, Laurent P.; Petit, Pascal; Snik, Frans; van Harten, Gerard

    2014-07-01

    UVMag is a project of a space mission equipped with a high-resolution spectropolarimeter working in the UV and visible range. This M-size mission will be proposed to ESA at its M4 call. The main goal of UVMag is to measure the magnetic fields, winds and environment of all types of stars to reach a better understanding of stellar formation and evolution and of the impact of stellar environment on the surrounding planets. The groundbreaking combination of UV and visible spectropolarimetric observations will allow the scientists to study the stellar surface and its environment simultaneously. The instrumental challenge for this mission is to design a high-resolution space spectropolarimeter measuring the full Stokes vector of the observed star in a huge spectral domain from 117 nm to 870 nm. This spectral range is the main difficulty because of the dispersion of the optical elements and of birefringence issues in the FUV. As the instrument will be launched into space, the polarimetric module has to be robust and therefore use, if possible, only static elements. This article presents the different design possibilities for the polarimeter at this point of the project.

  6. The Vector Space as a Unifying Concept in School Mathematics.

    ERIC Educational Resources Information Center

    Riggle, Timothy Andrew

    The purpose of this study was to show how the concept of vector space can serve as a unifying thread for mathematics programs--elementary school to pre-calculus college level mathematics. Indicated are a number of opportunities to demonstrate how emphasis upon the vector space structure can enhance the organization of the mathematics curriculum.…

  7. Balancing Chemical Reactions With Matrix Methods and Computer Assistance. Applications of Linear Algebra to Chemistry. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 339.

    ERIC Educational Resources Information Center

    Grimaldi, Ralph P.

    This material was developed to provide an application of matrix mathematics in chemistry, and to show the concepts of linear independence and dependence in vector spaces of dimensions greater than three in a concrete setting. The techniques presented are not intended to be considered as replacements for such chemical methods as oxidation-reduction…
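
    A small example of the linear-algebra viewpoint this unit describes: balancing 2 H2 + O2 -> 2 H2O by computing the null space of a composition matrix. This is a generic illustration, not the UMAP module's worked procedure; the reaction and the scaling convention are chosen for simplicity.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    # Balance a H2 + b O2 -> c H2O by finding the null space of the composition matrix.
    # Columns: H2, O2, H2O (products carry a negative sign); rows: H balance, O balance.
    A = np.array([[2.0, 0.0, -2.0],    # hydrogen
                  [0.0, 2.0, -1.0]])   # oxygen

    coeffs = null_space(A)[:, 0]                          # one-dimensional null space here
    coeffs = coeffs / coeffs[np.argmin(np.abs(coeffs))]   # scale so the smallest coefficient is 1
    print(np.round(coeffs, 6))                            # [2. 1. 2.]  ->  2 H2 + O2 -> 2 H2O
    ```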

  8. Disentangling the Cosmic Web with Lagrangian Submanifold

    NASA Astrophysics Data System (ADS)

    Shandarin, Sergei F.; Medvedev, Mikhail V.

    2016-10-01

    The Cosmic Web is a complicated, highly entangled geometrical object. Remarkably, it has formed from practically Gaussian initial conditions, which may be regarded as the simplest departure from an exactly uniform universe under a purely deterministic mapping. The full complexity of the web is revealed neither in configuration space nor in velocity space considered separately; it can be fully appreciated only in six-dimensional (6D) phase space. However, study of the phase space is complicated by the fact that every projection of it onto a three-dimensional (3D) space is multivalued and contains caustics. In addition, phase space is not a metric space, which complicates studies of its geometry. We suggest using the Lagrangian submanifold, i.e., x = x(q), where both x and q are 3D vectors, instead of the phase space for studying the complexity of the cosmic web in cosmological N-body dark matter simulations. Being fully equivalent in the dynamical sense to the phase space, it has the advantage of being single valued and also a metric space.

  9. Rhotrix Vector Spaces

    ERIC Educational Resources Information Center

    Aminu, Abdulhadi

    2010-01-01

    By rhotrix we understand an object that lies in some way between (n x n)-dimensional matrices and (2n - 1) x (2n - 1)-dimensional matrices. Representation of vectors in rhotrices is different from the representation of vectors in matrices. A number of vector spaces in matrices and their properties are known. On the other hand, little seems to be…

  10. Thyra Abstract Interface Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A.

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs), from iterative linear solvers and transient solvers all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.

  11. CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  12. Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution

    NASA Astrophysics Data System (ADS)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.

    2012-07-01

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  13. New Term Weighting Formulas for the Vector Space Method in Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisholm, E.; Kolda, T.G.

    The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
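
    As background for the discussion, a minimal sketch of one classical term weighting scheme (tf·idf) in the vector space model is given below, together with cosine similarity for ranking. The toy documents and the particular weighting formula are assumptions for illustration; they are not the new schemes the report proposes.

    ```python
    import numpy as np

    def tfidf_matrix(docs):
        """Build a term-by-document matrix with a classic tf-idf weighting:
        weight = tf * log(N / df). Rows are terms, columns are documents."""
        vocab = sorted({w for d in docs for w in d.split()})
        index = {w: i for i, w in enumerate(vocab)}
        tf = np.zeros((len(vocab), len(docs)))
        for j, d in enumerate(docs):
            for w in d.split():
                tf[index[w], j] += 1
        df = np.count_nonzero(tf, axis=1)          # document frequency per term
        idf = np.log(len(docs) / df)
        return tf * idf[:, None], vocab

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    docs = ["projection of a vector", "vector space model", "space flight projects"]
    W, vocab = tfidf_matrix(docs)
    q = np.zeros(len(vocab))
    for w in "vector space".split():               # query as a raw term-frequency vector
        if w in vocab:
            q[vocab.index(w)] += 1
    print([cosine(q, W[:, j]) for j in range(W.shape[1])])   # the second document ranks highest
    ```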

  14. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel optimal two-impulse trajectories which may originate at the earth or a point in space and fuel optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. The Davidon's Variance Method is used both in the accelerated gradient projection method and the outer loop of a 3-impulse trajectory optimization problem.

  15. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and the vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories in which five degrees of freedom can propagate: three for the massive vector modes and two for the massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.

  16. On the Chern-Gauss-Bonnet theorem for the noncommutative 4-sphere

    NASA Astrophysics Data System (ADS)

    Arnlind, Joakim; Wilson, Mitsuru

    2017-01-01

    We construct a differential calculus over the noncommutative 4-sphere in the framework of pseudo-Riemannian calculi, and show that for every metric in a conformal class of perturbations of the round metric, there exists a unique metric and torsion-free connection. Furthermore, we find a localization of the projective module corresponding to the space of vector fields, which allows us to formulate a Chern-Gauss-Bonnet type theorem for the noncommutative 4-sphere.

  17. Biquaternion beamspace with its application to vector-sensor array direction findings and polarization estimations

    NASA Astrophysics Data System (ADS)

    Li, Dan; Xu, Feng; Jiang, Jing Fei; Zhang, Jian Qiu

    2017-12-01

    In this paper, a biquaternion beamspace, constructed by projecting the original data of an electromagnetic vector-sensor array into a subspace of a lower dimension via a quaternion transformation matrix, is first proposed. To estimate the direction and polarization angles of sources, biquaternion beamspace multiple signal classification (BB-MUSIC) estimators are then formulated. The analytical results show that the biquaternion beamspaces offer us some additional degrees of freedom to simultaneously achieve three goals. One is to save the memory spaces for storing the data covariance matrix and reduce the computation efforts of the eigen-decomposition. Another is to decouple the estimations of the sources' polarization parameters from those of their direction angles. The other is to blindly whiten the coherent noise of the six constituent antennas in each vector-sensor. It is also shown that the existing biquaternion multiple signal classification (BQ-MUSIC) estimator is a specific case of our BB-MUSIC ones. The simulation results verify the correctness and effectiveness of the analytical ones.

  18. Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.

    PubMed

    Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin

    2005-09-15

    We present here an algorithm for computing stable, well-defined localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by diagonalization of two matrices with dimension the size of the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space as a minimal basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while our orbitals are orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.

  19. Improvement of cardiac CT reconstruction using local motion vector fields.

    PubMed

    Schirra, Carsten Oliver; Bontus, Claas; van Stevendaal, Udo; Dössel, Olaf; Grass, Michael

    2009-03-01

    The motion of the heart is a major challenge for cardiac imaging using CT. A novel approach to decrease motion blur and to improve the signal to noise ratio is motion compensated reconstruction, which takes motion vector fields into account in order to correct for motion. The presented work deals with the determination of local motion vector fields from high contrast objects and their utilization within motion compensated filtered back projection reconstruction. Image registration is applied during the quiescent cardiac phases. Temporal interpolation in parameter space is used in order to estimate motion during strong motion phases. The resulting motion vector fields are applied during image reconstruction. The method is assessed using a software phantom and several clinical cases for calcium scoring. As a criterion for reconstruction quality, calcium volume scores were derived from both gated cardiac reconstruction and motion compensated reconstruction throughout the cardiac phases using low pitch helical cone beam CT acquisitions. The presented technique is a robust method to determine and utilize local motion vector fields. Motion compensated reconstruction using the derived motion vector fields leads to superior image quality compared to gated reconstruction. As a result, the gating window can be enlarged significantly, resulting in increased SNR, while reliable Hounsfield units are achieved due to the reduced level of motion artefacts. The enlargement of the gating window can be translated into reduced dose requirements.

  20. Improved dense trajectories for action recognition based on random projection and Fisher vectors

    NASA Astrophysics Data System (ADS)

    Ai, Shihui; Lu, Tongwei; Xiong, Yudian

    2018-03-01

    As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. To improve the accuracy of action recognition in video with improved dense trajectories, an advanced vector method is introduced that combines Fisher vectors with random projection. The method reduces the dimensionality of the characteristic trajectory descriptors by projecting them from a high-dimensional space into a low-dimensional subspace via random projection, on which a Gaussian mixture model is defined and analyzed. A GMM-FV hybrid model is then introduced to encode the trajectory feature vectors with reduced dimension; random projection lowers the computational complexity by shrinking the Fisher coding vectors. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with several existing algorithms, the results show that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
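    A minimal sketch of this kind of pipeline (random projection of high-dimensional descriptors, a GMM fitted in the projected space, a simplified first-order Fisher-vector-style encoding, and a linear SVM); the descriptor dimension, component count, synthetic data, and the simplified encoding are all assumptions rather than the paper's implementation.

```python
# Sketch: random projection + GMM + simplified Fisher-style encoding + linear SVM.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(5000, 426))   # stand-in for dense-trajectory descriptors

# 1) Random projection of the descriptors to a low-dimensional subspace.
rp = GaussianRandomProjection(n_components=64, random_state=0)
low = rp.fit_transform(descriptors)

# 2) Gaussian mixture model on the projected descriptors.
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(low)

def encode(video_desc):
    """Simplified first-order Fisher-style encoding of one video's descriptors."""
    z = rp.transform(video_desc)
    post = gmm.predict_proba(z)                       # soft assignments, (n, K)
    diff = z[:, None, :] - gmm.means_[None, :, :]     # deviations from component means
    fv = (post[:, :, None] * diff / np.sqrt(gmm.covariances_)[None]).mean(axis=0)
    return fv.ravel()

# 3) Encode two groups of toy "videos" and train a linear SVM on the codes.
X = np.stack([encode(rng.normal(size=(200, 426)) + s)
              for s in (0.0, 1.5) for _ in range(10)])
y = np.repeat([0, 1], 10)
print("training accuracy:", LinearSVC().fit(X, y).score(X, y))
```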

  1. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  2. Bundles over nearly-Kahler homogeneous spaces in heterotic string theory

    NASA Astrophysics Data System (ADS)

    Klaput, Michael; Lukas, Andre; Matti, Cyril

    2011-09-01

    We construct heterotic vacua based on six-dimensional nearly-Kahler homogeneous manifolds and non-trivial vector bundles thereon. Our examples are based on three specific group coset spaces. It is shown how to construct line bundles over these spaces, compute their properties and build up vector bundles consistent with supersymmetry and anomaly cancelation. It turns out that the most interesting coset is SU(3)/U(1)2. This space supports a large number of vector bundles which lead to consistent heterotic vacua, some of them with three chiral families.

  3. Instantaneous brain dynamics mapped to a continuous state space.

    PubMed

    Billings, Jacob C W; Medda, Alessio; Shakil, Sadia; Shen, Xiaohong; Kashyap, Amrit; Chen, Shiyang; Abbas, Anzar; Zhang, Xiaodi; Nezafati, Maysam; Pan, Wen-Ju; Berman, Gordon J; Keilholz, Shella D

    2017-11-15

    Measures of whole-brain activity, from techniques such as functional Magnetic Resonance Imaging, provide a means to observe the brain's dynamical operations. However, interpretation of whole-brain dynamics has been stymied by the inherently high-dimensional structure of brain activity. The present research addresses this challenge through a series of scale transformations in the spectral, spatial, and relational domains. Instantaneous multispectral dynamics are first developed from input data via a wavelet filter bank. Voxel-level signals are then projected onto a representative set of spatially independent components. The correlation distance over the instantaneous wavelet-ICA state vectors is a graph that may be embedded onto a lower-dimensional space to assist the interpretation of state-space dynamics. Applying this procedure to a large sample of resting-state and task-active data (acquired through the Human Connectome Project), we segment the empirical state space into a continuum of stimulus-dependent brain states. Upon observing the local neighborhood of brain-states adopted subsequent to each stimulus, we may conclude that resting brain activity includes brain states that are, at times, similar to those adopted during tasks, but that are at other times distinct from task-active brain states. As task-active brain states often populate a local neighborhood, back-projection of segments of the dynamical state space onto the brain's surface reveals the patterns of brain activity that support many experimentally-defined states. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Project Blue: Optical Coronagraphic Imaging Search for Terrestrial-class Exoplanets in Alpha Centauri

    NASA Astrophysics Data System (ADS)

    Morse, Jon; Project Blue team

    2018-01-01

    Project Blue is a coronagraphic imaging space telescope mission designed to search for habitable worlds orbiting the nearest Sun-like stars in the Alpha Centauri system. With a 45-50 cm baseline primary mirror size, Project Blue will perform a reconnaissance of the habitable zones of Alpha Centauri A and B in blue light and one or two longer wavelength bands to determine the hue of any planets discovered. Light passing through the off-axis telescope feeds into a coronagraphic instrument that forms the heart of the mission. Various coronagraph designs are being considered, such as phase induced amplitude apodization (PIAA), vector vortex, etc. Differential orbital image processing techniques will be employed to analyze the data for faint planets embedded in the residual glare of the parent star. Project Blue will advance our knowledge about the presence or absence of terrestrial-class exoplanets in the habitable zones and measure the brightness of zodiacal dust around each star, which will aid future missions in planning their observational surveys of exoplanets. It also provides on-orbit demonstration of high-contrast coronagraphic imaging technologies and techniques that will be useful for planning and implementing future space missions by NASA and other space agencies. We present an overview of the science goals, mission concept and development schedule. As part of our cooperative agreement with NASA, the Project Blue team intends to make the data available in a publicly accessible archive.

  5. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
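    A minimal sketch of one of the three input-matrix constructions mentioned above, the phase-space (delay) embedding of the chaotic approach, feeding a support vector regression one-step-ahead predictor. The synthetic monthly series, embedding dimension, and SVR hyper-parameters are illustrative assumptions, not the study's settings.

```python
# Sketch: delay-embedded input matrix + SVR for one-month-ahead flow prediction.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
months = np.arange(600)
flow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 8, months.size)

def delay_embed(series, dim):
    """Input matrix of lagged values; target is the value one step ahead."""
    X = np.column_stack([series[i:len(series) - dim + i] for i in range(dim)])
    return X, series[dim:]

X, y = delay_embed(flow, dim=12)
split = len(y) - 36                      # hold out the last 3 years
model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE over hold-out months:", np.abs(pred - y[split:]).mean().round(2))
```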

  6. Dual Vector Spaces and Physical Singularities

    NASA Astrophysics Data System (ADS)

    Rowlands, Peter

    Though we often refer to 3-D vector space as constructed from points, there is no mechanism from within its definition for doing this. In particular, space, on its own, cannot accommodate the singularities that we call fundamental particles. This requires a commutative combination of space as we know it with another 3-D vector space, which is dual to the first (in a physical sense). The combination of the two spaces generates a nilpotent quantum mechanics/quantum field theory, which incorporates exact supersymmetry and ultimately removes the anomalies due to self-interaction. Among the many natural consequences of the dual space formalism are half-integral spin for fermions, zitterbewegung, Berry phase and a zero norm Berwald-Moor metric for fermionic states.

  7. Are Quantum Models for Order Effects Quantum?

    NASA Astrophysics Data System (ADS)

    Moreira, Catarina; Wichert, Andreas

    2017-12-01

    The application of principles of Quantum Mechanics in areas outside of physics has been getting increasing attention in the scientific community in an emergent discipline called Quantum Cognition. These principles have been applied to explain paradoxical situations that cannot be easily explained through classical theory. In quantum probability, events are characterised by a superposition state, which is represented by a state vector in an N-dimensional vector space. The probability of an event is given by the squared magnitude of the projection of this superposition state onto the desired subspace. This geometric approach is very useful to explain paradoxical findings that involve order effects, but do we really need quantum principles for models that only involve projections? This work has two main goals. First, it is still not clear in the literature if a quantum projection model has any advantage over a classical projection. We compared both models and concluded that the quantum projection model achieves the same results as its classical counterpart, because the quantum interference effects play no role in the computation of the probabilities. Second, it intends to propose an alternative relativistic interpretation for the rotation parameters that are involved in both classical and quantum models. In the end, instead of interpreting these parameters as a similarity measure between questions, we propose that they emerge due to the lack of knowledge concerning a personal basis state and also due to uncertainties about the state of the world and the context of the questions.
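    A minimal numpy illustration of the projection rule discussed above: the probability of an answer is the squared magnitude of the projection of the state vector onto the corresponding subspace, and applying two projections in different orders generally yields different probabilities. The 2-D state and the two measurement bases are arbitrary illustrative choices.

```python
# Sketch: Born-rule probabilities as squared projections, and an order effect.
import numpy as np

def projector(basis_vectors):
    """Orthogonal projector onto the span of the given vectors."""
    B = np.column_stack(basis_vectors)
    return B @ np.linalg.pinv(B)

state = np.array([0.6, 0.8])                        # normalized superposition state

# Question A in the computational basis, question B in a rotated basis.
e0 = np.array([1.0, 0.0])
theta = np.pi / 7
rotated = np.array([np.cos(theta), np.sin(theta)])

p_A = np.linalg.norm(projector([e0]) @ state) ** 2
p_B = np.linalg.norm(projector([rotated]) @ state) ** 2

# Order effect: projecting onto A then B differs from B then A in general.
p_A_then_B = np.linalg.norm(projector([rotated]) @ projector([e0]) @ state) ** 2
p_B_then_A = np.linalg.norm(projector([e0]) @ projector([rotated]) @ state) ** 2
print(p_A, p_B, p_A_then_B, p_B_then_A)
```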

  8. Self-Organizing-Map Program for Analyzing Multivariate Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.

    2005-01-01

    SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data typified by the data acquired by the Multi-angle Imaging Spectro-Radiometer [MISR (a spaceborne instrument)]. In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image. The Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all its data dimensions. SOM_VIS provides a control panel for selection of a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters to run SOM training. SOM_VIS also includes a component for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing an original input vector in 36 dimensions of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
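    A minimal self-organizing-map sketch in the spirit of the SOM step described above: multidimensional input vectors are mapped onto a small lattice whose node coordinates can then be turned into colors. The 36-dimensional random inputs, lattice size, and learning schedule are illustrative assumptions, not the enhanced algorithm used in SOM_VIS.

```python
# Sketch: basic SOM training loop mapping 36-D vectors onto an 8x8 lattice.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((1000, 36))            # stand-in for 36-band MISR-like pixel vectors

rows, cols, dim = 8, 8, data.shape[1]
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 5000, 0.5, 3.0
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # Best-matching unit (BMU) on the lattice.
    d2 = ((weights - x) ** 2).sum(axis=-1)
    bmu = np.unravel_index(np.argmin(d2), d2.shape)
    # Decaying learning rate and neighborhood radius.
    frac = t / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
    # Gaussian neighborhood pull toward the input vector.
    dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# Map each input pixel to its BMU lattice position (usable as 2 color channels).
bmu_idx = np.array([np.unravel_index(np.argmin(((weights - v) ** 2).sum(-1)), (rows, cols))
                    for v in data])
print("first five pixels mapped to lattice nodes:", bmu_idx[:5].tolist())
```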

  9. Embedded 3D shape measurement system based on a novel spatio-temporal coding method

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong

    2016-11-01

    Structured light measurement has been wildly used since 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medical and many other fields. In order to satisfy the demand for high speed, high precision and high resolution 3-D measurement for embedded system, a new patterns combining binary and gray coding principle in space are designed and projected onto the object surface orderly. Each pixel corresponds to the designed sequence of gray values in time - domain, which is treated as a feature vector. The unique gray vector is then dimensionally reduced to a scalar which could be used as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping in traditional phase shift methods is avoided. This algorithm is eventually implemented on DM3730 embedded system for 3-D measuring, which consists of an ARM and a DSP core and has a strong capability of digital signal processing. Experimental results demonstrated the feasibility of the proposed method.

  10. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    ERIC Educational Resources Information Center

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)

  11. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper explicitly natural (from the geometrical point of view) Fock-space representations (contragradient Verma modules) of the quantized enveloping algebras are constructed. In order to do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (assumed as a left comodule for the quantum group) by a projection to it of the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  12. A vector space model approach to identify genetically related diseases.

    PubMed

    Sarkar, Indra Neil

    2012-01-01

    The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi Syndrome. In the case of both Alzheimer Disease and Prader-Willi Syndrome, a set of plausible diseases were identified that may warrant further exploration. This study furthers seminal work by Swanson, et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both biomedical literature and genomic resources (like GenBank) can be combined towards identification of putative correlations of interest. To this end, the relevance of the predicted diseases of interest in this study using the vector space modeling approach were validated based on supporting literature. The results of this study suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.

  13. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
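    A minimal dense-matrix illustration of the Krylov-space idea behind correction vectors, entirely outside any DMRG machinery: a Lanczos basis built from A|gs> is used to apply the resolvent (omega + E0 - H + i*eta)^(-1), and the spectral weight is -Im <b|x> / pi. The random symmetric "Hamiltonian", the diagonal operator A, the broadening eta, and the Krylov dimension are all illustrative assumptions.

```python
# Sketch: Lanczos (Krylov-space) evaluation of a correction-vector spectral function.
import numpy as np

rng = np.random.default_rng(0)
n, m, eta = 200, 80, 0.1
H = rng.normal(size=(n, n)); H = (H + H.T) / 2          # toy Hermitian "Hamiltonian"
A = np.diag(rng.normal(size=n))                          # toy local operator

evals, evecs = np.linalg.eigh(H)
E0, gs = evals[0], evecs[:, 0]
b = A @ gs
normb = np.linalg.norm(b)

def lanczos(H, v0, m):
    """m-step Lanczos recurrence; returns the tridiagonal matrix T."""
    alphas, betas = np.zeros(m), np.zeros(m - 1)
    v_prev, v, beta = np.zeros_like(v0), v0 / np.linalg.norm(v0), 0.0
    for j in range(m):
        w = H @ v - beta * v_prev
        alphas[j] = v @ w
        w = w - alphas[j] * v
        if j < m - 1:
            beta = np.linalg.norm(w)
            betas[j] = beta
            v_prev, v = v, w / beta
    return np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)

T = lanczos(H, b, m)
e0 = np.zeros(m); e0[0] = 1.0

omegas = np.linspace(0.0, 5.0, 50)
spectrum = [-np.imag(normb ** 2 *
                     np.linalg.solve((om + E0 + 1j * eta) * np.eye(m) - T, e0)[0]) / np.pi
            for om in omegas]
print(np.round(spectrum[:5], 4))
```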

  14. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  15. Evaluation of Aerodynamic Drag and Torque for External Tanks in Low Earth Orbit

    PubMed Central

    Stone, William C.; Witzgall, Christoph

    2006-01-01

    A numerical procedure is described in which the aerodynamic drag and torque in low Earth orbit are calculated for a prototype Space Shuttle external tank and its components, the “LO2” and “LH2” tanks, carrying liquid oxygen and hydrogen, respectively, for any given angle of attack. Calculations assume the hypersonic limit of free molecular flow theory. Each shell of revolution is assumed to be described by a series of parametric equations for their respective contours. It is discretized into circular cross sections perpendicular to the axis of revolution, which yield a series of ellipses when projected according to the given angle of attack. The drag profile, that is, the projection of the entire shell is approximated by the convex envelope of those ellipses. The area of the drag profile, that is, the drag area, and its center of area moment, that is, the drag center, are then calculated and permit determination of the drag vector and the eccentricity vector from the center of gravity of the shell to the drag center. The aerodynamic torque is obtained as the cross product of those vectors. The tanks are assumed to be either evacuated or pressurized with a uniform internal gas distribution: dynamic shifting of the tank center of mass due to residual propellant sloshing is not considered. PMID:27274926
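    A minimal sketch of the final torque step described above: given the drag vector acting at the drag center and the eccentricity vector from the tank's center of gravity to that drag center, the aerodynamic torque is their cross product. The numbers below are arbitrary illustrative values, not external-tank data.

```python
# Sketch: aerodynamic torque as the cross product of eccentricity and drag vectors.
import numpy as np

drag_area = 480.0                      # m^2, area of the projected drag profile (assumed)
q = 1.2e-7                             # N/m^2, free-molecular dynamic pressure (assumed)
flow_dir = np.array([-1.0, 0.0, 0.0])  # unit vector of the incident flow

drag_vector = q * drag_area * flow_dir                 # total drag force, N
eccentricity = np.array([0.0, 0.9, 2.4])               # m, from CG to drag center (assumed)

torque = np.cross(eccentricity, drag_vector)           # N*m, aerodynamic torque
print("drag force [N]:", drag_vector, " torque [N*m]:", torque)
```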

  16. Tracking Avian Reservoirs of Arboviruses using Remote Sensing and Radiotelemetry

    NASA Technical Reports Server (NTRS)

    Beck, L.; Wright, S.; Schmidt, C.; Lobitz, B.; Bell, D.; Brown, D.; Brass, James A. (Technical Monitor)

    2002-01-01

    Encephalitis is caused by a virus that is transmitted by mosquitoes between mammalian hosts. The virus is closely related to the West Nile virus (WNV), which first appeared in New York in 1999 and has since spread to 25 states. Like encephalitis, WNV is vectored by mosquitoes, and the primary hosts are birds; humans are accidental, or 'dead-end', hosts. Very little is understood about the behavior of these bird populations and how they intersect, both in time and in space, with mosquito populations. Exploring these relationships is the first step in developing models for encephalitis and WNV transmission risk. This project combines remotely sensed data with radiotelemetry to create a spatiotemporal map of encephalitis viral activity in bird and mosquito populations in the Sacramento Valley of California. Specifically, remote sensing (RS) and geographic information system (GIS) technologies were used to characterize habitats utilized by both avian viral reservoirs and the mosquitoes that vector encephalitis. Radiotelemetry and serosurveys (blood) were then used to spatially and temporally track the patterns of infection. The project uses Landsat ETM+ multitemporal satellite data to characterize habitats utilized by both birds and the mosquito vectors. Mist nets were used to sample members of individual flocks of blackbirds and cowbirds over a period of several months; these birds were then bled to assess their viral status, banded, and fitted with transmitters. Radiotelemetry was used to spatially and temporally track the distribution of banded birds and their associated flocks. The movements of these indicator flocks were compared with the locations of remotely sensed (adult and larval) mosquito habitats to determine where birds and vectors intersect; this is key to understanding where and when transmission occurs from bird to bird, as well as from bird to mammal, via mosquito. The relationships found during the project are being used to generate a model of encephalitis transmission risk in California.

  17. All ASD complex and real 4-dimensional Einstein spaces with Λ≠0 admitting a nonnull Killing vector

    NASA Astrophysics Data System (ADS)

    Chudecki, Adam

    2016-12-01

    Anti-self-dual (ASD) 4-dimensional complex Einstein spaces with nonzero cosmological constant Λ equipped with a nonnull Killing vector are considered. It is shown that any conformally nonflat metric of such spaces can be always brought to a special form and the Einstein field equations can be reduced to the Boyer-Finley-Plebański equation (Toda field equation). Some alternative forms of the metric are discussed. All possible real slices (neutral, Euclidean and Lorentzian) of ASD complex Einstein spaces with Λ≠0 admitting a nonnull Killing vector are found.

  18. Learning with LOGO: Logo and Vectors.

    ERIC Educational Resources Information Center

    Lough, Tom; Tipps, Steve

    1986-01-01

    This is the first of a two-part series on the general concept of vector space. Provides tool procedures to allow investigation of vector properties, vector addition and subtraction, and X and Y components. Lists several sources of additional vector ideas. (JM)

  19. Principal fiber bundle description of number scaling for scalars and vectors: application to gauge theory

    NASA Astrophysics Data System (ADS)

    Benioff, Paul

    2015-05-01

    The purpose of this paper is to put the description of number scaling and its effects on physics and geometry on a firmer foundation, and to make it more understandable. A main point is that two different concepts, number and number value are combined in the usual representations of number structures. This is valid as long as just one structure of each number type is being considered. It is not valid when different structures of each number type are being considered. Elements of base sets of number structures, considered by themselves, have no meaning. They acquire meaning or value as elements of a number structure. Fiber bundles over a space or space time manifold, M, are described. The fiber consists of a collection of many real or complex number structures and vector space structures. The structures are parameterized by a real or complex scaling factor, s. A vector space at a fiber level, s, has, as scalars, real or complex number structures at the same level. Connections are described that relate scalar and vector space structures at both neighbor M locations and at neighbor scaling levels. Scalar and vector structure valued fields are described and covariant derivatives of these fields are obtained. Two complex vector fields, each with one real and one imaginary field, appear, with one complex field associated with positions in M and the other with position dependent scaling factors. A derivation of the covariant derivative for scalar and vector valued fields gives the same vector fields. The derivation shows that the complex vector field associated with scaling fiber levels is the gradient of a complex scalar field. Use of these results in gauge theory shows that the imaginary part of the vector field associated with M positions acts like the electromagnetic field. The physical relevance of the other three fields, if any, is not known.

  20. Managing the resilience space of the German energy system - A vector analysis.

    PubMed

    Schlör, Holger; Venghaus, Sandra; Märker, Carolin; Hake, Jürgen-Friedrich

    2018-07-15

    The UN Sustainable Development Goals formulated in 2016 confirmed the sustainability concept of the Earth Summit of 1992 and supported UNEP's green economy transition concept. The transformation of the energy system (Energiewende) is the keystone of Germany's sustainability strategy and of the German green economy concept. We use ten updated energy-related indicators of the German sustainability strategy to analyse the German energy system. The development of the sustainability indicators is examined in the monitoring process by a vector analysis performed in two-dimensional Euclidean space (the Euclidean plane). The aim of the novel vector analysis is to measure the current status of the Energiewende in Germany and thereby provide decision makers with information about the strains of the specific remaining pathway, both for the single indicators and for the total system, in order to meet the sustainability targets of the Energiewende. Within this vector model, three vectors (the normative sustainable development vector, the real development vector, and the green economy vector) define the resilience space of our analysis. The resilience space encloses a number of vectors representing different pathways, with different technological and socio-economic strains, to achieve a sustainable development of the green economy. In this space, the decision will be made as to whether the government measures will lead to a resilient energy system or whether a readjustment of indicator targets or political measures is necessary. The vector analysis enables us to analyse both the government's ambition, expressed in the sustainability targets set for the indicators at the start of the strategy and representing the starting preference order of the German government (SPO), and the current preference order of German society (CPO), which determines the remaining distance to be bridged to reach the specific sustainability goals of the strategy. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection, and 3D-reconstruction experiments are conducted to test the proposed method, and good results are obtained.
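    A minimal extreme-learning-machine sketch of the idea described above: a single-hidden-layer feedforward network whose hidden weights are random and fixed, with the output weights solved in closed form by regularized least squares. The toy mapping from 2-D control signals to 3-D beam vectors, the neuron count, and the ridge term are illustrative assumptions, not the calibrated GLS model.

```python
# Sketch: SLFN trained by ELM (random hidden layer, closed-form output weights).
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: control signals (2-D) -> outgoing beam direction (3-D).
ctrl = rng.uniform(-1, 1, size=(2000, 2))
ax, ay = 0.3 * ctrl[:, 0], 0.2 * ctrl[:, 1]            # pretend mirror angles
beams = np.column_stack([np.sin(ax), np.sin(ay), np.cos(ax) * np.cos(ay)])

n_hidden, ridge = 200, 1e-6
W = rng.normal(size=(ctrl.shape[1], n_hidden))          # random input weights (fixed)
b = rng.normal(size=n_hidden)                           # random biases (fixed)

def hidden(X):
    """Hidden-layer activations of the SLFN."""
    return np.tanh(X @ W + b)

# Closed-form ELM solution for the output weights (ridge-regularized least squares).
H = hidden(ctrl)
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ beams)

test = rng.uniform(-1, 1, size=(5, 2))
print(np.round(hidden(test) @ beta, 4))
```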

  2. Evaluation of the impacts of climate change on disease vectors through ecological niche modelling.

    PubMed

    Carvalho, B M; Rangel, E F; Vale, M M

    2017-08-01

    Vector-borne diseases are exceptionally sensitive to climate change. Predicting vector occurrence in specific regions is a challenge that disease control programs must meet in order to plan and execute control interventions and climate change adaptation measures. Recently, an increasing number of scientific articles have applied ecological niche modelling (ENM) to study medically important insects and ticks. With a myriad of available methods, it is challenging to interpret their results. Here we review the future projections of disease vectors produced by ENM, and assess their trends and limitations. Tropical regions are currently occupied by many vector species; but future projections indicate poleward expansions of suitable climates for their occurrence and, therefore, entomological surveillance must be continuously done in areas projected to become suitable. The most commonly applied methods were the maximum entropy algorithm, generalized linear models, the genetic algorithm for rule set prediction, and discriminant analysis. Lack of consideration of the full-known current distribution of the target species on models with future projections has led to questionable predictions. We conclude that there is no ideal 'gold standard' method to model vector distributions; researchers are encouraged to test different methods for the same data. Such practice is becoming common in the field of ENM, but still lags behind in studies of disease vectors.

  3. Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries

    NASA Astrophysics Data System (ADS)

    Meljanac, Daniel; Meljanac, Stjepan; Pikutić, Danijel

    2017-12-01

    Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincaré-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ -Minkowski spaces and (iii) κ -Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed.

  4. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators

    DOE PAGES

    Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; ...

    2015-01-09

    We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: m DM, M med , g DM and g q, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework canmore » be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.« less

  5. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) that uses grouping and selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors than the existing algorithms.
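    A minimal sketch of the conventional affine projection algorithm for system identification, showing the role of the block of recent input vectors that the grouping and selection procedures above would prune; those procedures themselves are not reproduced here. Filter length, projection order, step size, and the toy plant are assumptions.

```python
# Sketch: conventional affine projection algorithm (APA) identifying a toy FIR plant.
import numpy as np

rng = np.random.default_rng(0)
L, K, mu, delta = 16, 4, 0.5, 1e-3          # taps, projection order, step, regularization

h_true = rng.normal(size=L)                 # unknown system to identify
x = rng.normal(size=20000)                  # input signal
d = np.convolve(x, h_true, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))

w = np.zeros(L)
for n in range(L + K, len(x)):
    # Matrix whose columns are the K most recent input vectors (length L each).
    X = np.column_stack([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
    e = d[n - K + 1:n + 1][::-1] - X.T @ w            # a-priori errors for the block
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

print("coefficient error norm:", np.linalg.norm(w - h_true).round(4))
```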

  6. Energy Dissipation of Rayleigh Waves due to Absorption Along the Path by the Use of Finite Element Method

    DTIC Science & Technology

    1979-07-31

    [OCR fragment of the report's glossary of symbols and Chapter III: the strain vector, the space derivative of the stress tensor, the force vector per unit volume, the density, the total force, the stiffness matrix, the displacement vector, the mass matrix, the space operating matrix, the matrix of moduli, the operating matrix in the Z direction, and the matrix of shape functions; followed by a truncated passage noting that in a dissipating medium the deformation of a solid is a function of time, temperature and space, and that creep is a deformation process.]

  7. The Sequential Implementation of Array Processors when there is Directional Uncertainty

    DTIC Science & Technology

    1975-08-01

    [OCR fragment of the report's acknowledgements (the University of Washington supplied office space and computing facilities; the author benefited from discussions with several colleagues) and of its glossary of symbols, including the inverse of Q, the general observation space, the general observation vector of dimension K, the ith observation, the real vector space of dimension m, and R(T), the autocorrelation.]

  8. A Spatio-temporal Model of African Animal Trypanosomosis Risk

    PubMed Central

    Dicko, Ahmadou H.; Percoma, Lassane; Sow, Adama; Adam, Yahaya; Mahama, Charles; Sidibé, Issa; Dayo, Guiguigbaza-Kossigan; Thévenon, Sophie; Fonta, William; Sanfo, Safietou; Djiteye, Aligui; Salou, Ernest; Djohan, Vincent; Cecchi, Giuliano; Bouyer, Jérémy

    2015-01-01

    Background African animal trypanosomosis (AAT) is a major constraint to sustainable development of cattle farming in sub-Saharan Africa. The habitat of the tsetse fly vector is increasingly fragmented owing to demographic pressure and shifts in climate, which leads to heterogeneous risk of cyclical transmission both in space and time. In Burkina Faso and Ghana, the most important vectors are riverine species, namely Glossina palpalis gambiensis and G. tachinoides, which are more resilient to human-induced changes than the savannah and forest species. Although many authors studied the distribution of AAT risk both in space and time, spatio-temporal models allowing predictions of it are lacking. Methodology/Principal Findings We used datasets generated by various projects, including two baseline surveys conducted in Burkina Faso and Ghana within PATTEC (Pan African Tsetse and Trypanosomosis Eradication Campaign) national initiatives. We computed the entomological inoculation rate (EIR) or tsetse challenge using a range of environmental data. The tsetse apparent density and their infection rate were separately estimated and subsequently combined to derive the EIR using a “one layer-one model” approach. The estimated EIR was then projected into suitable habitat. This risk index was finally validated against data on bovine trypanosomosis. It allowed a good prediction of the parasitological status (r2 = 67%), showed a positive correlation but less predictive power with serological status (r2 = 22%) aggregated at the village level but was not related to the illness status (r2 = 2%). Conclusions/Significance The presented spatio-temporal model provides a fine-scale picture of the dynamics of AAT risk in sub-humid areas of West Africa. The estimated EIR was high in the proximity of rivers during the dry season and more widespread during the rainy season. The present analysis is a first step in a broader framework for an efficient risk management of climate-sensitive vector-borne diseases. PMID:26154506
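    A minimal raster-style sketch of the risk-index construction described above: the entomological inoculation rate (EIR), or tsetse challenge, is obtained by combining the separately estimated tsetse apparent density and infection rate layers, and the result is projected into suitable habitat only. The toy grids and suitability threshold are illustrative assumptions, not the study's layers.

```python
# Sketch: combine density and infection-rate layers into an EIR-like risk index.
import numpy as np

rng = np.random.default_rng(0)
shape = (50, 50)                                   # toy raster over the study area

apparent_density = rng.gamma(2.0, 3.0, shape)      # flies per trap per day (estimated layer)
infection_rate = rng.beta(2.0, 40.0, shape)        # proportion of infected flies (estimated layer)
habitat_suitability = rng.random(shape)            # 0..1 suitability (e.g. from remote sensing)

# Combine the separately estimated layers into the tsetse challenge (EIR-like index).
eir = apparent_density * infection_rate

# Project the risk index into suitable habitat only.
suitable = habitat_suitability > 0.5
eir_masked = np.where(suitable, eir, 0.0)
print("mean risk in suitable habitat:", eir_masked[suitable].mean().round(3))
```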

  9. Breathing motion compensated reconstruction for C-arm cone beam CT imaging: initial experience based on animal data

    NASA Astrophysics Data System (ADS)

    Schäfer, D.; Lin, M.; Rao, P. P.; Loffroy, R.; Liapi, E.; Noordhoek, N.; Eshuis, P.; Radaelli, A.; Grass, M.; Geschwind, J.-F. H.

    2012-03-01

    C-arm based tomographic 3D imaging is applied in an increasing number of minimally invasive procedures. Due to the limited acquisition speed for a complete projection data set required for tomographic reconstruction, breathing motion is a potential source of artifacts. This is the case for patients who cannot comply with breathing commands (e.g. due to anesthesia). Intra-scan motion estimation and compensation is required. Here, a scheme for projection based local breathing motion estimation is combined with an anatomy adapted interpolation strategy and subsequent motion compensated filtered back projection. The breathing motion vector is measured as a displacement vector on the projections of a tomographic short scan acquisition using the diaphragm as a landmark. Scaling of the displacement to the acquisition iso-center and anatomy adapted volumetric motion vector field interpolation delivers a 3D motion vector per voxel. Motion compensated filtered back projection incorporates this motion vector field in the image reconstruction process. This approach is applied in animal experiments on a flat panel C-arm system, delivering improved image quality (lower artifact levels, improved tumor delineation) in 3D liver tumor imaging.

  10. Inertial upper stage - Upgrading a stopgap proves difficult

    NASA Astrophysics Data System (ADS)

    Geddes, J. P.

    The technological and project management difficulties associated with the Inertial Upper Stage's (IUS) development and performance to date are assessed, with a view to future prospects for this system. The IUS was designed for use both on the interim Titan 34D booster and the Space Shuttle Orbiter. The IUS malfunctions and cost overruns reported are substantially due to the system's reliance on novel propulsion and avionics technology. Its two solid rocket motors, which were selected on the basis of their inherent safety for use on the Space Shuttle, have the longest burn time extant. A three-dimensional carbon/carbon nozzle throat had to be developed to sustain this long burn, as were lightweight composite wound cases and skirts, insulation, igniters, and electromechanical thrust vector control.

  11. Capabilities of software "Vector-M" for a diagnostics of the ionosphere state from auroral emissions images and plasma characteristics from the different orbits as a part of the system of control of space weather

    NASA Astrophysics Data System (ADS)

    Avdyushev, V.; Banshchikova, M.; Chuvashov, I.; Kuzmin, A.

    2017-09-01

    This paper presents the capabilities of the software "Vector-M" for diagnostics of the ionosphere state from auroral emission images and plasma characteristics from different orbits, as part of a system for monitoring space weather. The software "Vector-M" was developed by the celestial mechanics and astrometry department of Tomsk State University in collaboration with the Space Research Institute (Moscow) and the Central Aerological Observatory of the Russian Federal Service for Hydrometeorology and Environmental Monitoring. The software "Vector-M" is intended for the calculation of attendant geophysical and astronomical information for the centre of mass of the spacecraft and for the space of observations in the experiment with the auroral imager Aurovisor-VIS/MP on the orbit of the prospective Meteor-MP spacecraft.

  12. NUDTSNA at TREC 2015 Microblog Track: A Live Retrieval System Framework for Social Network based on Semantic Expansion and Quality Model

    DTIC Science & Technology

    2015-11-20

    [Fragment:] Similarity between tweets and profiles is computed as follows. The TFIDF score calculates the cosine similarity between a tweet and a profile in the vector space model using the TF-IDF weights of terms. The vector space model represents a document as a vector, so tweets and profiles can both be expressed as vectors. Evaluation uses a gain function over the returned tweet set, where gain() scores each tweet; uninteresting or spam/junk tweets receive a gain of 0.

  13. Systems of conservation laws with third-order Hamiltonian structures

    NASA Astrophysics Data System (ADS)

    Ferapontov, Evgeny V.; Pavlov, Maxim V.; Vitolo, Raffaele F.

    2018-06-01

    We investigate n-component systems of conservation laws that possess third-order Hamiltonian structures of differential-geometric type. The classification of such systems is reduced to the projective classification of linear congruences of lines in P^{n+2} satisfying additional geometric constraints. Algebraically, the problem can be reformulated as follows: for a vector space W of dimension n+2, classify n-tuples of skew-symmetric 2-forms A^α ∈ Λ²(W) such that φ_{βγ} A^β ∧ A^γ = 0 for some non-degenerate symmetric φ.

  14. Trends in space activities in 2014: The significance of the space activities of governments

    NASA Astrophysics Data System (ADS)

    Paikowsky, Deganit; Baram, Gil; Ben-Israel, Isaac

    2016-01-01

    This article addresses the principal events of 2014 in the field of space activities, and extrapolates from them the primary trends that can be identified in governmental space activities. In 2014, global space activities centered on two vectors. The first was geopolitical, and the second relates to the matrix between increasing commercial space activities and traditional governmental space activities. In light of these two vectors, the article outlines and analyzes trends of space exploration, human spaceflights, industry and technology, cooperation versus self-reliance, and space security and sustainability. It also reviews the space activities of the leading space-faring nations.

  15. A global SOLIS vector spectromagnetograph (VSM) network

    NASA Astrophysics Data System (ADS)

    Streander, K. V.; Giampapa, M. S.; Harvey, J. W.; Henney, C. J.; Norton, A. A.

    2008-07-01

    Understanding the Sun's magnetic field related activity is far from complete as reflected in the limited ability to make accurate predictions of solar variability. To advance our understanding of solar magnetism, the National Solar Observatory (NSO) constructed the Synoptic Optical Long-term Investigations of the Sun (SOLIS) suite of instruments to conduct high precision optical measurements of processes on the Sun whose study requires sustained observations over long time periods. The Vector Spectromagnetograph (VSM), the principal SOLIS instrument, has been in operation since 2003 and obtains photospheric vector data, as well as photospheric and chromospheric longitudinal magnetic field measurements. Instrument performance is being enhanced by employing new, high-speed cameras that virtually freeze seeing, thus improving sensitivity to measure the solar magnetic field configuration. A major operational goal is to provide real-time and near-real-time data for forecasting space weather and increase scientific yield from shorter duration solar space missions and ground-based research projects. The National Solar Observatory proposes to build two near-duplicates of the VSM instrument and place them at international sites to form a three-site global VSM network. Current electronic industry practice of short lifetime cycles leads to improved performance and reduced acquisition costs but also to redesign costs and engineering impacts that must be minimized. The current VSM instrument status and experience gained from working on the original instrument is presented herein and used to demonstrate that one can dramatically reduce the estimated cost and fabrication time required to duplicate and commission two additional instruments.

  16. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
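    A minimal sketch of the constrained-reconstruction idea described above: gradient-descent steps on a simple quadratic smoothness prior, with the transform coefficients projected back into their quantization cells after each step. A whole-image DCT and a quadratic prior stand in for the block DCT and the non-Gaussian MRF model of the paper; the toy image, quantizer step, and gradient step size are illustrative assumptions.

```python
# Sketch: gradient step on a smoothness prior + projection onto quantization cells.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.0, 0.05, (64, 64)).cumsum(axis=1) + 0.5, 0.0, 1.0)

q = 0.5                                            # coarse quantization step
coeffs = dctn(img, norm="ortho")
quantized = np.round(coeffs / q) * q               # "compressed" representation
lo, hi = quantized - q / 2, quantized + q / 2      # quantization cell bounds

centroid = idctn(quantized, norm="ortho")          # standard decoder output (cell centroid)
x, step = centroid.copy(), 0.1
for _ in range(200):
    # Gradient of a quadratic smoothness prior (discrete Laplacian of the image).
    grad = 4 * x - (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                    np.roll(x, 1, 1) + np.roll(x, -1, 1))
    x = x - step * grad
    # Project back onto the quantization constraint set in the DCT domain.
    x = idctn(np.clip(dctn(x, norm="ortho"), lo, hi), norm="ortho")

print("mean change from centroid reconstruction:", np.abs(x - centroid).mean().round(5))
```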

  17. The SAMEX Vector Magnetograph: A Design Study for a Space-Based Solar Vector Magnetograph

    NASA Technical Reports Server (NTRS)

    Hagyard, M. J.; Gary, G. A.; West, E. A.

    1988-01-01

    This report presents the results of a pre-phase A study performed by the Marshall Space Flight Center (MSFC) for the Air Force Geophysics Laboratory (AFGL) to develop a design concept for a space-based solar vector magnetograph and hydrogen-alpha telescope. These are two of the core instruments for a proposed Air Force mission, the Solar Activities Measurement Experiments (SAMEX). This mission is designed to study the processes which give rise to activity in the solar atmosphere and to develop techniques for predicting solar activity and its effects on the terrestrial environment.

  18. Vectoring of parallel synthetic jets: A parametric study

    NASA Astrophysics Data System (ADS)

    Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram

    2016-11-01

    The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).

  19. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing the evolution operator model. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. In fact, finding an optimal projection is a significant part of model selection, because, on the one hand, the transformation of the data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the minimum description length principle (Molkov et al., 2009) to select models of optimal complexity. The proposed method is applied to the optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific and has the form of multi-level stochastic differential equations (SDE) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P. 046207, 2009. S. Kravtsov, D. Kondrashov and M. Ghil, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.
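    A minimal illustration of model selection by description length, in the spirit of the approach above but far simpler: polynomial models of increasing order are fitted to a noisy series, and the order minimizing a two-part description length (data misfit plus a complexity penalty) is selected. The synthetic series, the specific description-length formula, and the candidate orders are illustrative assumptions, not the EMR setting.

```python
# Sketch: choosing model complexity by minimizing a two-part description length.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * t + 0.5 * t**3 + rng.normal(0, 0.1, t.size)   # "true" model is cubic
N = t.size

def description_length(order):
    """Misfit term plus a parameter-cost term for a polynomial of given order."""
    coeffs = np.polyfit(t, y, order)
    rss = np.sum((np.polyval(coeffs, t) - y) ** 2)
    k = order + 1                                  # number of model parameters
    return 0.5 * N * np.log(rss / N) + 0.5 * k * np.log(N)

orders = list(range(1, 10))
dl = [description_length(m) for m in orders]
print("selected polynomial order:", orders[int(np.argmin(dl))])
```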

  20. Anisotropic fractal media by vector calculus in non-integer dimensional space

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. Integration over non-integer-dimensional spaces is considered, and differential operators of first and second orders for fractional and non-integer dimensional spaces are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.
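
    For orientation, the product-measure integration that the abstract refers to is usually written in the following schematic form (a sketch of standard non-integer dimensional integration for a radially symmetric function, not necessarily the exact conventions of the paper):

      \[
        \int f(r)\, d^{D}x \;=\; \frac{2\pi^{D/2}}{\Gamma(D/2)} \int_{0}^{\infty} f(r)\, r^{D-1}\, dr,
        \qquad
        d^{D}x \;=\; \prod_{k} \frac{\pi^{\alpha_{k}/2}}{\Gamma(\alpha_{k}/2)}\, \lvert x_{k}\rvert^{\alpha_{k}-1}\, dx_{k},
        \qquad
        D = \sum_{k} \alpha_{k},
      \]

    where each \alpha_k is the (possibly non-integer) dimension attached to the k-th direction, so unequal \alpha_k encode the anisotropy of the medium.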

  1. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  2. AAV vector-mediated secretion of chondroitinase provides a sensitive tracer for axonal arborisations.

    PubMed

    Alves, João Nuno; Muir, Elizabeth M; Andrews, Melissa R; Ward, Anneliese; Michelmore, Nicholas; Dasgupta, Debayan; Verhaagen, Joost; Moloney, Elizabeth B; Keynes, Roger J; Fawcett, James W; Rogers, John H

    2014-04-30

    As part of a project to express chondroitinase ABC (ChABC) in neurons of the central nervous system, we have inserted a modified ChABC gene into an adeno-associated viral (AAV) vector and injected it into the vibrissal motor cortex in adult rats to determine the extent and distribution of expression of the enzyme. A similar vector for expression of green fluorescent protein (GFP) was injected into the same location. For each vector, two versions with minor differences were used, giving similar results. After 4 weeks, the brains were stained to show GFP and products of chondroitinase digestion. Chondroitinase was widely expressed, and the AAV-ChABC and AAV-GFP vectors gave similar expression patterns in many respects, consistent with the known projections from the directly transduced neurons in vibrissal motor cortex and adjacent cingulate cortex. In addition, diffusion of vector to deeper neuronal populations led to labelling of remote projection fields which was much more extensive with AAV-ChABC than with AAV-GFP. The most notable of these populations are inferred to be neurons of cortical layer 6, projecting widely in the thalamus, and neurons of the anterior pole of the hippocampus, projecting through most of the hippocampus. We conclude that, whereas GFP does not label the thinnest axonal branches of some neuronal types, chondroitinase is efficiently secreted from these arborisations and enables their extent to be sensitively visualised. After 12 weeks, chondroitinase expression was undiminished. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. As soon as the adaptive filter reaches the steady state, the update is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
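
    For concreteness, the update that the selective scheme builds on is the standard affine projection step over the K most recent input vectors; a minimal NumPy sketch is given below. The MSE-based input-vector selection and the state-decision criterion described in the abstract are only indicated by comments, and all names are hypothetical.

      import numpy as np

      def apa_update(w, X, d, mu=0.5, delta=1e-6):
          # One affine projection update.
          #   w : (L,)   current filter coefficients
          #   X : (K, L) the K selected (most recent) input vectors
          #   d : (K,)   desired responses for those inputs
          # In the selective-update variant, K would be chosen by an MSE test and the
          # update skipped entirely once the state-decision rule declares steady state.
          e = d - X @ w                                             # a-priori errors
          gain = np.linalg.solve(X @ X.T + delta * np.eye(len(d)), e)
          return w + mu * X.T @ gain, e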

  4. On Electromagnetic Power Waves and Power Density Components.

    NASA Astrophysics Data System (ADS)

    Petzold, Donald Wayne

    1980-12-01

    On January 10, 1884, Lord Rayleigh presented a paper entitled "On the Transfer of Energy in the Electromagnetic Field" to the Royal Society of London. This paper had been authored by the late Fellow of Trinity College, Cambridge, Professor J. H. Poynting, and in it he claimed that there was a general law for the transfer of electromagnetic energy. He argued that associated with each point in space is a quantity, since called the Poynting vector, that is a measure of the rate of energy flow per unit area. His analysis was concerned with the integration of this power density vector at all points over an enclosing surface of a specific volume. The interpretation of this Poynting vector as a true measure of the local power density was viewed with great skepticism unless the vector was integrated over a closed surface, as the development of the concept required. However, within the last decade or so, Shadowitz indicates that a number of prominent authors have argued that the criticism of the interpretation of Poynting's vector as a local power density vector is unjustified. The present paper is not concerned with these arguments but instead with a decomposition of Poynting's power density vector into two and only two components: one vector which has the same direction as Poynting's vector and which is called the forward power density vector, and another vector, directed opposite to the Poynting vector and called the reverse power density vector. These new local forward and reverse power density vectors will be shown to depend upon forward and reverse power wave vectors, and these vectors in turn will be related to newly defined forward and reverse components of the electric and magnetic fields. The sum of these forward and reverse power density vectors, which is simply the original Poynting vector, is associated with the total electromagnetic energy traveling past the local point. Another vector, which is the difference between the forward and reverse power density vectors and which will be shown to be associated with the total electric and magnetic field energy densities existing at a local point, will also be introduced. These local forward and reverse power density vectors may be integrated over a surface to determine the forward and reverse powers, and from these results problems related to maximum power transfer or efficiency of electromagnetic energy transmission in space may be studied in a manner similar to that presently being done with transmission lines, waveguides, and more recently with two-port and multiport lumped-parameter systems. These new forward and reverse power density vectors at a point in space are analogous to the forward and reverse voltages or currents and power waves as used with the transmission line, waveguide, or port. These power wave vectors in space are a generalization of the power waves as developed by Penfield, Youla, and Kurokawa and used with the scattering parameters associated with transmission lines, waveguides and ports.
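
    For reference, the quantity being decomposed is the Poynting vector; the split discussed in the abstract can be summarized schematically as follows (the precise definitions of the forward and reverse components are those of the thesis and are not reproduced here):

      \[
        \mathbf{S} \;=\; \mathbf{E} \times \mathbf{H},
        \qquad
        \mathbf{S} \;=\; \mathbf{S}_{f} + \mathbf{S}_{r},
      \]

    with \mathbf{S}_f parallel to \mathbf{S} (the forward power density) and \mathbf{S}_r antiparallel to it (the reverse power density), while the difference \mathbf{S}_f - \mathbf{S}_r is the vector associated with the total electric and magnetic field energy densities at the point.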

  5. Information Theoretic Characterization of Physical Theories with Projective State Space

    NASA Astrophysics Data System (ADS)

    Zaopo, Marco

    2015-08-01

    Probabilistic theories are a natural framework for investigating the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over the complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information theoretic requirements: every state of a system that is not completely mixed is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed, the less information can be stored in the system using that state as a logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of the complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence indicates either classical theory or an implausible theory.

  6. Interoperability Policy Roadmap

    DTIC Science & Technology

    2010-01-01

    Retrieval – SMART The technique developed by Dr. Gerard Salton for automated information retrieval and text analysis is called the vector-space... Salton, G., Wong, A., Yang, C.S., “A Vector Space Model for Automatic Indexing”, Communications of the ACM, 18, 613-620. [10] Salton, G., McGill

  7. Application of Hyperspectal Techniques to Monitoring & Management of Invasive Plant Species Infestation

    DTIC Science & Technology

    2008-01-09

    The image data as acquired from the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data... The color of a material is defined by the direction of its unit vector in n-dimensional spectral space. The length of the vector relates only to how...to n-dimensional space. SAM determines the similarity

  8. Development of a NEW Vector Magnetograph at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    This paper will describe the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. Our paper will describe the motivation for developing this magnetograph, compare this instrument with traditional magnetograph designs, and present a comparison of the data acquired by this instrument and the original MSFC vector magnetograph.

  9. Reconstruction of fetal vector electrocardiogram from maternal abdominal signals under fetus body rotations.

    PubMed

    Nabeshima, Yuji; Kimura, Yoshitaka; Ito, Takuro; Ohwada, Kazunari; Karashima, Akihiro; Katayama, Norihiro; Nakao, Mitsuyuki

    2013-01-01

    Fetal electrocardiogram (fECG) and its vector form (fVECG) could provide significant clinical information concerning the physiological condition of a fetus. So far, various independent component analysis (ICA)-based methods for extracting the fECG from maternal abdominal signals have been proposed. Because full extraction of the component waves, such as P, Q, R, S, and T, is difficult to realize under noisy and nonstationary conditions, the fVECG is even harder to reconstruct, since different projections of the fetal heart vector are required. In order to reconstruct the fVECG, we propose a novel method for synthesizing different projections of the heart vector that makes good use of fetal movement. This method consists of ICA, estimation of the rotation angles of the fetus, and synthesis of projections of the heart vector. Through applications to synthetic and actual data, our method is shown to precisely estimate the rotation angle of the fetus and to successfully reconstruct the fVECG.

  10. A Turn-Projected State-Based Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of an aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.

  11. Representation of magnetic fields in space

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1975-01-01

    Several methods by which a magnetic field in space can be represented are reviewed with particular attention to problems of the observed geomagnetic field. Time dependence is assumed to be negligible, and five main classes of representation are described by vector potential, scalar potential, orthogonal vectors, Euler potentials, and expanded magnetic field.

  12. Knowledge Space: A Conceptual Basis for the Organization of Knowledge

    ERIC Educational Resources Information Center

    Meincke, Peter P. M.; Atherton, Pauline

    1976-01-01

    Proposes a new conceptual basis for visualizing the organization of information, or knowledge, which differentiates between the concept "vectors" for a field of knowledge represented in a multidimensional space, and the state "vectors" for a person based on his understanding of these concepts, and the representational…

  13. Anisotropic fractal media by vector calculus in non-integer dimensional space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2014-08-15

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. Integration over non-integer-dimensional spaces is considered, and differential operators of first and second orders for fractional and non-integer dimensional spaces are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  14. Torsion axial vector and Yvon-Takabayashi angle: zitterbewegung, chirality and all that

    NASA Astrophysics Data System (ADS)

    Fabbri, Luca; da Rocha, Roldão

    2018-03-01

    We consider propagating torsion as a completion of gravitation in order to describe the dynamics of curved-twisted space-times filled with Dirac spinorial fields; we discuss interesting relationships of the torsion axial vector and the curvature tensor with the Yvon-Takabayashi angle and the module of the spinor field, that is the two degrees of freedom of the spinor field itself: in particular, we shall discuss in what way the torsion axial vector could be seen as the potential of a specific interaction of the Yvon-Takabayashi angle, and therefore as a force between the two chiral projections of the spinor field itself. Chiral interactions of the components of a spinor may render effects of zitterbewegung, as well as effective mass terms and other related features: we shall briefly sketch some of the analogies and differences with the similar but not identical situation given by the Yukawa interaction occurring in the Higgs sector of the standard model. We will provide some overall considerations about general consequences for contemporary physics, consequences that have never been discussed before, so far as we are aware, in the present physics literature.

  15. Color TV: total variation methods for restoration of vector-valued images.

    PubMed

    Blomgren, P; Chan, T F

    1998-01-01

    We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
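
    One common way to write a vector-valued TV norm with these properties, in the spirit of (though not necessarily identical to) the definition proposed in the paper, is:

      \[
        \mathrm{TV}_{n,m}(u) \;=\; \sqrt{\sum_{i=1}^{m} \bigl[\mathrm{TV}(u_{i})\bigr]^{2}},
        \qquad
        \mathrm{TV}(u_{i}) \;=\; \int_{\Omega} \lvert \nabla u_{i} \rvert \, dx,
      \]

    for an m-channel image u = (u_1, ..., u_m); it reduces to the usual TV norm when m = 1 and, being built from first-order gradients, does not penalize discontinuities (edges) any more than the scalar norm does.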

  16. A Vector Approach to Euclidean Geometry: Inner Product Spaces, Euclidean Geometry and Trigonometry, Volume 2. Teacher's Edition.

    ERIC Educational Resources Information Center

    Vaughan, Herbert E.; Szabo, Steven

    This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…

  17. Effective Web and Desktop Retrieval with Enhanced Semantic Spaces

    NASA Astrophysics Data System (ADS)

    Daoud, Amjad M.

    We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure the scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].

  18. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
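
    The mapping step based on partial least squares regression can be sketched in a few lines of Python; the feature matrices, dimensions and helper below are hypothetical placeholders rather than the authors' implementation, and the PVP projection is assumed to have been applied beforehand.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # Hypothetical training data: rows are terms, columns are (already projected) features.
      X_src = np.random.rand(200, 50)   # source-language term vectors
      Y_tgt = np.random.rand(200, 50)   # aligned target-language term vectors

      pls = PLSRegression(n_components=10)
      pls.fit(X_src, Y_tgt)             # learn the cross-lingual mapping

      def rank_candidates(query_vec, candidates):
          # Map a source-term vector into the target space, then rank target-language
          # candidates by cosine similarity to the mapped vector.
          mapped = pls.predict(query_vec.reshape(1, -1)).ravel()
          sims = candidates @ mapped / (
              np.linalg.norm(candidates, axis=1) * np.linalg.norm(mapped) + 1e-12)
          return np.argsort(-sims)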

  19. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    DTIC Science & Technology

    2016-12-01

    This report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional space (3D) and a Rotation...class for performing rotations of vectors in 3D. Each class is self-contained in a single header file (Vector.h and Rotation.h) so that a C...vector, rotation, 3D, quaternion, C++ tools, rotation sequence, Euler angles, yaw, pitch, roll, orientation

  20. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.

  1. Observation of Polarization Vortices in Momentum Space

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian

    2018-05-01

    The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.

  2. Observation of Polarization Vortices in Momentum Space.

    PubMed

    Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian

    2018-05-04

    The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.

  3. Holomorphic projections and Ramanujan’s mock theta functions

    PubMed Central

    Imamoğlu, Özlem; Raum, Martin; Richter, Olav K.

    2014-01-01

    We use spectral methods of automorphic forms to establish a holomorphic projection operator for tensor products of vector-valued harmonic weak Maass forms and vector-valued modular forms. We apply this operator to discover simple recursions for Fourier series coefficients of Ramanujan’s mock theta functions. PMID:24591582

  4. Naval Medical Research and Development News. Volume 7, Issue 10

    DTIC Science & Technology

    2015-10-01

    SR) product against adult Aedes aegypti the primary vector for DENV. The goal of this project is to obtain evidence that SRs lessen contact between...multi-site project designated to test the SR against the dengue vector Aedes aegypti. Four other sites will evaluate its impact against malarial

  5. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of that process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power, retains the useful signal and filters out noise. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
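
    A minimal NumPy/SciPy sketch of the measurement construction described in the abstract (Toeplitz matrix of the signal, SVD, first M left singular vectors acting as the sampling operator) is shown below; the sizes, the code replica and the noise model are placeholders, not the authors' implementation, and the ℓ1 reconstruction step is only indicated in a comment.

      import numpy as np
      from scipy.linalg import toeplitz

      code = np.sign(np.random.randn(1024))                 # stand-in for the local PRN code replica
      TZ = toeplitz(code, np.r_[code[0], np.zeros(255)])    # rectangular Toeplitz matrix, 1024 x 256

      U, s, Vt = np.linalg.svd(TZ, full_matrices=False)     # left singular vectors of TZ
      M = 64                                                # number of compressive measurements, M << 1024
      Phi = U[:, :M].T                                      # sampling operator from the first M left singular vectors

      received = code + 0.5 * np.random.randn(code.size)    # noisy stand-in for an IF signal snapshot
      y = Phi @ received                                    # sub-Nyquist observation vector

      # Reconstruction would proceed via l1 minimization (basis pursuit with a convex solver),
      # followed by the usual FFT-based parallel frequency-space search acquisition.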

  6. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of that process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power, retains the useful signal and filters out noise. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain. PMID:29772731

  7. Analysis of structural response data using discrete modal filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.

    1991-01-01

    The application of reciprocal modal vectors to the analysis of structural response data is described. Reciprocal modal vectors are constructed using an existing experimental modal model and an existing frequency response matrix of a structure, and can be assembled into a matrix that effectively transforms the data from the physical space to a modal space within a particular frequency range. In other words, the weighting matrix necessary for modal vector orthogonality (typically the mass matrix) is contained within the reciprocal modal matrix. The underlying goal of this work is mostly directed toward observing the modal state responses in the presence of unknown, possibly closed-loop forcing functions, thus having an impact on both operating data analysis techniques and independent modal space control techniques. This study investigates the behavior of reciprocal modal vectors as modal filters with respect to certain calculation parameters and their performance with perturbed system frequency response data.
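
    The basic modal-filtering operation can be sketched as a pseudo-inverse of the modal matrix; the reciprocal modal vectors of the thesis are built from an experimental modal model and a measured frequency response matrix, so the sketch below (with made-up dimensions) only illustrates the bare idea.

      import numpy as np

      # Hypothetical modal matrix: columns are mode shapes sampled at n_dof sensor locations.
      n_dof, n_modes = 12, 3
      Phi = np.random.randn(n_dof, n_modes)

      # Reciprocal modal vectors: rows of a pseudo-inverse of the modal matrix, so R @ Phi ~ I.
      # The weighting needed for orthogonality (typically the mass matrix) is absorbed into R.
      R = np.linalg.pinv(Phi)

      # Physical response histories (n_dof x n_samples) are filtered to modal state responses.
      x = np.random.randn(n_dof, 1000)
      q = R @ x          # each row approximates one modal coordinate history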

  8. Gamow-Teller response in the configuration space of a density-functional-theory-rooted no-core configuration-interaction model

    NASA Astrophysics Data System (ADS)

    Konieczka, M.; Kortelainen, M.; Satuła, W.

    2018-03-01

    Background: The atomic nucleus is a unique laboratory in which to study fundamental aspects of the electroweak interaction. This includes a question concerning the in-medium renormalization of the axial-vector current, which still lacks a satisfactory explanation. Study of the spin-isospin or Gamow-Teller (GT) response may provide valuable information both on the quenching of the axial-vector coupling constant and on nuclear structure and nuclear astrophysics. Purpose: We have performed a seminal calculation of the GT response by using the no-core configuration-interaction approach rooted in multireference density functional theory (DFT-NCCI). The model properly treats isospin and rotational symmetries and can be applied to calculate both the nuclear spectra and transition rates in atomic nuclei, irrespective of their mass and particle-number parity. Methods: The DFT-NCCI calculation proceeds as follows: First, one builds a configuration space by computing the (multi)particle-(multi)hole Slater determinants relevant for a given physical problem. Next, one applies the isospin and angular-momentum projections and performs the isospin and K mixing in order to construct a model space composed of linearly dependent states of good angular momentum. Eventually, one mixes the projected states by solving the Hill-Wheeler-Griffin equation. Results: The method is applied to compute the GT strength distribution in selected N ≈ Z nuclei including the p-shell 8Li and 8Be nuclei and the sd-shell well-deformed nucleus 24Mg. In order to demonstrate the flexibility of the approach, we also present a calculation of the superallowed GT β decay in doubly-magic spherical 100Sn and the low-spin spectrum in 100In. Conclusions: It is demonstrated that the DFT-NCCI model is capable of capturing the GT response satisfactorily well using a relatively small configuration space, while simultaneously exhausting the GT sum rule. The model, due to its flexibility and broad range of applicability, may either serve as a complement or even as an alternative to other theoretical approaches, including the conventional nuclear shell model.
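
    The final mixing step mentioned above amounts to a generalized eigenvalue problem of the generator-coordinate type; schematically (the kernels are taken between the projected configurations):

      \[
        \sum_{j} \left( \mathcal{H}_{ij} - E_{k}\, \mathcal{N}_{ij} \right) f^{(k)}_{j} = 0,
        \qquad
        \mathcal{H}_{ij} = \langle \Phi_{i} \vert \hat{H} \vert \Phi_{j} \rangle,
        \qquad
        \mathcal{N}_{ij} = \langle \Phi_{i} \vert \Phi_{j} \rangle,
      \]

    where the |\Phi_i> are the isospin- and angular-momentum-projected states and the f^{(k)}_j are the mixing amplitudes of the k-th solution.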

  9. Modeling Musical Context With Word2Vec

    NASA Astrophysics Data System (ADS)

    Herremans, Dorien; Chuan, Ching-Hua

    2017-05-01

    We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice.
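
    A minimal sketch of the modelling step with gensim (parameter names follow recent gensim releases; the slice tokens are invented for illustration) might look like this:

      from gensim.models import Word2Vec

      # Each "sentence" is the sequence of slice tokens for one piece; a token encodes
      # the pitch content of one time slice (token names here are made up).
      corpus = [
          ["C4_E4_G4", "C4_E4_G4", "B3_D4_G4", "C4_E4_G4"],
          ["A3_C4_E4", "E3_G3_B3", "A3_C4_E4"],
      ]

      # Skip-gram (sg=1) with negative sampling, as described in the abstract.
      model = Word2Vec(corpus, vector_size=32, window=4, sg=1, negative=10, min_count=1)

      # Context similarity between slices is a cosine similarity in the embedded space.
      print(model.wv.most_similar("C4_E4_G4", topn=3))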

  10. Regional and seasonal response of a West Nile virus vector to climate change.

    PubMed

    Morin, Cory W; Comrie, Andrew C

    2013-09-24

    Climate change will affect the abundance and seasonality of West Nile virus (WNV) vectors, altering the risk of virus transmission to humans. Using downscaled general circulation model output, we calculate a WNV vector's response to climate change across the southern United States using process-based modeling. In the eastern United States, Culex quinquefasciatus response to projected climate change displays a latitudinal and elevational gradient. Projected summer population depressions as a result of increased immature mortality and habitat drying are most severe in the south and almost absent further north; extended spring and fall survival is ubiquitous. Much of California also exhibits a bimodal pattern. Projected onset of mosquito season is delayed in the southwestern United States because of extremely dry and hot spring and summers; however, increased temperature and late summer and fall rains extend the mosquito season. These results are unique in being a broad-scale calculation of the projected impacts of climate change on a WNV vector. The results show that, despite projected widespread future warming, the future seasonal response of C. quinquefasciatus populations across the southern United States will not be homogeneous, and will depend on specific combinations of local and regional conditions.

  11. Elimination of projection effects from vector magnetograms - The pre-flare configuration of active region AR 4474

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, P.; Hagyard, M. J.; Hathaway, D. H.

    1988-01-01

    A simple method of transforming vector magnetograms to heliographic coordinates is demonstrated. The merits of this transformation are illustrated using a vector magnetogram obtained with the MSFC vector magnetograph 80 minutes prior to a white light flare in active region AR 4474 on April 25, 1984. The original magnetogram shows strong magnetic shear along the neutral line at both the flare site and a nonflaring site. The transformation of the magnetogram to heliographic coordinates shows that the elimination of projection effects results in a much shorter length of the sheared region at the nonflaring site than what is inferred from the image plane vector magnetogram. The length of the sheared region at the flare site is relatively less affected by the transformation.

  12. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct), data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined. The direct space domain processing technique was found to be far superior to any other techniques known, in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.

  13. Large-scale long-term particle simulations of runaway electrons in tokamaks

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Qin, Hong; Wang, Yulei

    2016-10-01

    Understanding the dynamical behavior of runaway electrons is crucial for assessing the safety of tokamaks. Though many important analytical and numerical results have been achieved, the overall dynamic behavior of runaway electrons in a realistic tokamak configuration is still rather vague. In this work, secular full-orbit simulations of runaway electrons are carried out based on a relativistic volume-preserving algorithm. Detailed phase-space behaviors of runaway electrons are investigated on different timescales spanning 11 orders of magnitude. A detailed analysis of the collisionless neoclassical scattering is provided when considering the coupling between the rotation of the momentum vector and the background field. On large timescales, the initial condition of runaway electrons in phase space globally influences the runaway distribution. It is discovered that the parameters and field configuration of tokamaks can modify the runaway electron dynamics significantly. Simulations on 10 million cores of a supercomputer using the APT code have been completed. A resolution of 10^7 in phase space is used, and simulations are performed for 10^11 time steps. Large-scale simulations show that in a realistic fusion reactor, the concern of runaway electrons is not as serious as previously thought. This research was supported by the National Magnetic Confinement Fusion Energy Research Project (2015GB111003, 2014GB124005), the National Natural Science Foundation of China (NSFC-11575185, 11575186) and the GeoAlgorithmic Plasma Simulator (GAPS) Project.

  14. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    NASA Technical Reports Server (NTRS)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
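
    A toy example of such a condition query, expressed as a simple geometric selection in a derived space, is sketched below in Python; the field, the derived quantities (speed and helicity density) and the thresholds are arbitrary illustrations, not the paper's implementation.

      import numpy as np

      # Hypothetical 3-D vector field sampled on a grid: u has shape (nx, ny, nz, 3).
      nx, ny, nz = 32, 32, 32
      u = np.random.randn(nx, ny, nz, 3)

      # Derived-space mapping: each grid point is sent to (speed, helicity density).
      du = [np.gradient(u[..., i]) for i in range(3)]        # du[i][j] = d u_i / d x_j (unit spacing)
      vorticity = np.stack([du[2][1] - du[1][2],
                            du[0][2] - du[2][0],
                            du[1][0] - du[0][1]], axis=-1)
      speed = np.linalg.norm(u, axis=-1)
      helicity = np.sum(u * vorticity, axis=-1)

      # The condition query is a region of the derived (speed, helicity) plane:
      mask = (speed > 1.0) & (np.abs(helicity) > 0.5)
      points = np.argwhere(mask)                             # grid locations satisfying the query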

  15. Research on spatial-variant property of bistatic ISAR imaging plane of space target

    NASA Astrophysics Data System (ADS)

    Guo, Bao-Feng; Wang, Jun-Ling; Gao, Mei-Guo

    2015-04-01

    The imaging plane of inverse synthetic aperture radar (ISAR) is the projection plane of the target. When taking an image using the range-Doppler theory, the imaging plane may have a spatial-variant property, which causes the scatterers' projection positions to change and results in migration through resolution cells. In this study, we focus on the spatial-variant property of the imaging plane of a three-axis-stabilized space target. The innovative contributions are as follows. 1) The target motion model in orbit is provided based on a two-body model. 2) The instantaneous imaging plane is determined by the method of vector analysis. 3) Three Euler angles are introduced to describe the spatial-variant property of the imaging plane, and the image quality is analyzed. The simulation results confirm the analysis of the spatial-variant property. The research in this study is significant for the selection of the imaging segment, and provides support for subsequent data processing and compensation algorithms. Project supported by the National Natural Science Foundation of China (Grant No. 61401024), the Shanghai Aerospace Science and Technology Innovation Foundation, China (Grant No. SAST201240), and the Basic Research Foundation of Beijing Institute of Technology (Grant No. 20140542001).

  16. MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.

    PubMed

    Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G

    2012-12-07

    MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input data, it accepts metagenomic reads from various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation reads (including SOLiD and Illumina). To the authors' knowledge, it is the first metagenomic web service capable of processing SOLiD color-space reads. The web service allows phylogenetic and functional profiling of metagenomic samples using the coverage depth resulting from the alignment of the reads to the catalogue of reference sequences which are built into the pipeline and contain prevalent microbial genomes and genes of the human gut microbiota. The obtained metagenomic composition vectors are processed by the statistical analysis and visualization module containing methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies, allowing their comparative analysis together with user samples, namely datasets from the Russian Metagenome project, MetaHIT and the Human Microbiome Project (downloaded from http://hmpdacc.org). MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL, Python, with all major browsers supported.

  17. A note on φ-analytic conformal vector fields

    NASA Astrophysics Data System (ADS)

    Deshmukh, Sharief; Bin Turki, Nasser

    2017-09-01

    Taking a cue from the analytic vector fields on a complex manifold, φ-analytic conformal vector fields are defined on a Riemannian manifold (Deshmukh and Al-Solamy in Colloq. Math. 112(1):157-161, 2008). In this paper, we use φ-analytic conformal vector fields to find new characterizations of the n-sphere S^n(c) and the Euclidean space (R^n, ⟨ , ⟩).
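
    For context, a vector field ξ on a Riemannian manifold (M, g) is conformal when its flow rescales the metric (standard definition; the additional φ-analyticity condition introduced in the cited work is not reproduced here):

      \[
        \mathcal{L}_{\xi}\, g \;=\; 2 f\, g
      \]

    for some smooth potential function f, with ξ Killing precisely when f vanishes identically.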

  18. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    PubMed Central

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm designed especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM. PMID:27143958

  19. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    PubMed

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm designed especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM.

  20. Color deconvolution. Optimizing handling of 3D unitary optical density vectors with polar coordinates.

    PubMed

    Bigras, Gilbert

    2012-06-01

    Color deconvolution relies on the determination of unitary optical density vectors (OD(3D)) derived from pure constituent stains initially defined as intensity vectors in RGB space. OD(3D) can be defined in polar coordinates (phi, theta, radius); since the radius is always equal to one, it can be ignored. Easier handling of unitary optical density 2D vectors (OD(2D)) is shown. OD(2D) pure stains used in anatomical pathology were assessed as centroid values (phi, theta) with a measure of variance: inertia, based on arc lengths between the centroid value and sampled points. These variables were plotted on a stereographic projection plane. In order to assess pure-stain OD(2D), different methods of sampling RGB pixels were tested and compared: (1) direct sampling of nuclei from preparations using (a) composite H&E and (b) hematoxylin only, and (2) for any pure-stain RGB image, different associated 8-bit masks (saturation, brightness and RGB average) used for sampling. Behaviors of phi, theta and inertia were obtained by moving a threshold in the 8-bit mask histograms. Phi and theta stability were tested against variable light intensity during image acquisition and by using 2 different image acquisition systems. The more saturated the RGB pixels are, the more stable the phi, theta and inertia values obtained. Different commercial hematoxylins have distinct OD(2D) characteristics. UltraView DAB stain shows high inertia and is angularly closer to the usual counterstains than ultraView Red stain, which also has a lower inertia. Superior accuracy is expected from the latter stain. Phi and theta OD(2D) values are sensitive to light intensity variation, to the imaging system used and to the objectives used. An ImageJ plugin was designed to plot and interactively modify OD(2D) values with instant update of the color deconvolution, allowing heuristic segmentation. Utilization of polar OD(2D) eases the statistical characterization of OD(3D) vectors: conditions of optimal sampling were demonstrated and various factors influencing OD(2D) stability were explored. These findings are not restricted to anatomical pathology but can be applied to bright-field microscopy and subtractive color applications in general.
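
    The conversion from a sampled pure-stain RGB pixel to a unit OD(3D) vector and its polar angles can be sketched as follows; the angular conventions and the example pixel are illustrative and may differ from those used in the paper.

      import numpy as np

      def rgb_to_od(rgb):
          # Beer-Lambert optical density from 8-bit intensities, avoiding log(0).
          return -np.log10((np.asarray(rgb, dtype=float) + 1.0) / 256.0)

      def od_polar(rgb):
          # Unit OD(3D) vector expressed as polar angles (phi, theta); the radius is
          # always 1 and is therefore dropped, as in the abstract.
          od = rgb_to_od(rgb)
          od = od / np.linalg.norm(od)
          phi = np.degrees(np.arctan2(od[1], od[0]))            # azimuth in the R-G plane
          theta = np.degrees(np.arccos(np.clip(od[2], -1, 1)))  # inclination from the B axis
          return phi, theta

      # Example: a hematoxylin-like pixel sampled from a pure-stain control slide.
      print(od_polar([72, 85, 160]))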

  1. Changes of Space Debris Orbits After LDR Operation

    NASA Astrophysics Data System (ADS)

    Wnuk, E.; Golebiewska, J.; Jacquelard, C.; Haag, H.

    2013-09-01

    A lot of technical studies are currently developing concepts for the active removal of space debris to protect space assets from on-orbit collisions. For small objects, such concepts include the use of ground-based lasers to remove or reduce the momentum of the objects, thereby lowering their orbit in order to facilitate their decay by re-entry into the Earth's atmosphere. The concept of the Laser Debris Removal (LDR) system is the main subject of the CLEANSPACE project. One of the CLEANSPACE objectives is to define a global architecture (including surveillance, identification and tracking) for an innovative ground-based laser solution, which can remove hazardous medium debris around selected space assets. The CLEANSPACE project is realized by a European consortium in the frame of the European Commission Seventh Framework Programme (FP7), Space topic. The use of a sequence of laser operations to remove space debris needs very precise predictions of future space debris orbital positions, on a level even better than 1 meter. Orbit determination, tracking (radar, optical and laser) and orbit prediction have to be performed with an accuracy much better than has been achieved so far. For that, the applied prediction tools have to take into account all perturbation factors that influence the object's orbit. The expected effect on the object's trajectory after the LDR operation is a lowering of its perigee. To prevent the debris on this new trajectory from colliding with another object, a precise trajectory prediction after the LDR sequence is therefore the main task, which also allows the re-entry parameters to be estimated. The LDR laser pulses change the debris object velocity v. The future orbit and re-entry parameters of the space debris after the LDR engagement can be calculated if the resulting Δv vector is known with sufficient accuracy. The value of Δv may be estimated from the parameters of the LDR station and from the characteristics of the orbital debris. However, usually due to the poor knowledge of the debris object's size, mass, spin and chemical composition, the value and the direction of the vector Δv cannot be estimated with high accuracy. Therefore, high-precision tracking of the debris will be necessary immediately before the engagement of the LDR and also during this engagement. By extending this tracking and ranging for a few seconds after engagement, the necessary data to evaluate the orbital modification can be produced in the same way as is done for catalogue generation. In our paper we discuss the object's orbit changes due to the LDR operation for different locations of the LDR station and different values of the laser energy and telescope diameter. We estimate the future orbit and re-entry parameters taking into account the influence of all important perturbation factors on the space debris orbital motion after LDR.

  2. Estimated effects of projected climate change on the basic reproductive number of the Lyme disease vector Ixodes scapularis.

    PubMed

    Ogden, Nicholas H; Radojevic, Milka; Wu, Xiaotian; Duvvuri, Venkata R; Leighton, Patrick A; Wu, Jianhong

    2014-06-01

    The extent to which climate change may affect human health by increasing risk from vector-borne diseases has been under considerable debate. We quantified potential effects of future climate change on the basic reproduction number (R0) of the tick vector of Lyme disease, Ixodes scapularis, and explored their importance for Lyme disease risk, and for vector-borne diseases in general. We applied observed temperature data for North America and projected temperatures using regional climate models to drive an I. scapularis population model to hindcast recent, and project future, effects of climate warming on R0. Modeled R0 increases were compared with R0 ranges for pathogens and parasites associated with variations in key ecological and epidemiological factors (obtained by literature review) to assess their epidemiological importance. R0 for I. scapularis in North America increased during the years 1971-2010 in spatio-temporal patterns consistent with observations. Increased temperatures due to projected climate change increased R0 by factors (2-5 times in Canada and 1.5-2 times in the United States), comparable to observed ranges of R0 for pathogens and parasites due to variations in strains, geographic locations, epidemics, host and vector densities, and control efforts. Climate warming may have co-driven the emergence of Lyme disease in northeastern North America, and in the future may drive substantial disease spread into new geographic regions and increase tick-borne disease risk where climate is currently suitable. Our findings highlight the potential for climate change to have profound effects on vectors and vector-borne diseases, and the need to refocus efforts to understand these effects.

  3. Pattern formation in superdiffusion Oregonator model

    NASA Astrophysics Data System (ADS)

    Feng, Fan; Yan, Jia; Liu, Fu-Cheng; He, Ya-Feng

    2016-10-01

    Pattern formation in an Oregonator model with superdiffusion is studied in two-dimensional (2D) numerical simulations. Stability analyses are performed by applying Fourier and Laplace transforms to the space-fractional reaction-diffusion system. Antispirals, stable Turing patterns, and travelling patterns are observed by changing the diffusion index of the activator. Analyses of Floquet multipliers show that the limit cycle solution loses stability at the wave number of the primitive vector of the travelling hexagonal pattern. We also observe a transition between antispirals and spirals by changing the diffusion index of the inhibitor. Project supported by the National Natural Science Foundation of China (Grant Nos. 11205044 and 11405042), the Research Foundation of Education Bureau of Hebei Province, China (Grant Nos. Y2012009 and ZD2015025), the Program for Young Principal Investigators of Hebei Province, China, and the Midwest Universities Comprehensive Strength Promotion Project.

  4. Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem

    DTIC Science & Technology

    1999-12-01

    [Thesis excerpt; recoverable information only.] The nonlinear least squares model is defined as Y = f(θ, t), where θ is the M-element parameter vector and Y is the N-element vector of all data. Naval Postgraduate School, Monterey, California; Master's thesis, December 1999: Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem.

  5. An Elementary Treatment of General Inner Products

    ERIC Educational Resources Information Center

    Graver, Jack E.

    2011-01-01

    A typical first course on linear algebra is usually restricted to vector spaces over the real numbers and the usual positive-definite inner product. Hence, the proof that dim(S) + dim(S⊥) = dim(V) is not presented in a way that is generalizable to non-positive-definite inner products or to vector spaces over other fields. In this…

  6. Wigner functions on non-standard symplectic vector spaces

    NASA Astrophysics Data System (ADS)

    Dias, Nuno Costa; Prata, João Nuno

    2018-01-01

    We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

  7. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    NASA Astrophysics Data System (ADS)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. For the adjustment of a combined network on the ellipsoid, it is shown that the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS observations.
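
    To make the preferred functional model concrete, the following minimal Python sketch writes a Cartesian GNSS vector directly as a function of the geodetic coordinates of its endpoints and obtains the linearized observation-equation coefficients numerically. The GRS80 constants are standard; the station coordinates, the differentiation step and the use of a finite-difference Jacobian (rather than the paper's explicitly specified coefficients) are assumptions for illustration.

        import numpy as np

        A_AXIS, F = 6378137.0, 1.0 / 298.257222101      # GRS80 semi-major axis and flattening
        E2 = F * (2.0 - F)                               # first eccentricity squared

        def geodetic_to_ecef(p):
            """Geodetic (lat [rad], lon [rad], h [m]) -> Cartesian ECEF coordinates [m]."""
            lat, lon, h = p
            n = A_AXIS / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
            return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                             (n + h) * np.cos(lat) * np.sin(lon),
                             (n * (1.0 - E2) + h) * np.sin(lat)])

        def gnss_vector(p1, p2):
            """Model value of the Cartesian GNSS vector between two geodetic points."""
            return geodetic_to_ecef(p2) - geodetic_to_ecef(p1)

        def design_matrix(p1, p2, eps=1e-6):
            """Linearized observation-equation coefficients w.r.t. (lat1, lon1, h1, lat2, lon2, h2)."""
            x0, jac = np.concatenate([p1, p2]), np.zeros((3, 6))
            for k in range(6):
                dx = np.zeros(6)
                dx[k] = eps
                jac[:, k] = (gnss_vector(*(x0 + dx).reshape(2, 3)) -
                             gnss_vector(*(x0 - dx).reshape(2, 3))) / (2.0 * eps)
            return jac

        p1 = np.array([np.radians(50.0), np.radians(22.0), 250.0])   # illustrative station 1
        p2 = np.array([np.radians(50.1), np.radians(22.2), 310.0])   # illustrative station 2
        print(design_matrix(p1, p2))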

  8. Multiple site receptor modeling with a minimal spanning tree combined with a Kohonen neural network

    NASA Astrophysics Data System (ADS)

    Hopke, Philip K.

    1999-12-01

    A combination of two pattern recognition methods has been developed that allows the generation of geographical emission maps from multivariate environmental data. In such a projection into a visually interpretable subspace by a Kohonen Self-Organizing Feature Map, the topology of the higher-dimensional variable space can be preserved, but parts of the information about the correct neighborhood among the sample vectors will be lost. This can partly be compensated for by an additional projection of Prim's Minimal Spanning Tree into the trained neural network. This new environmental receptor modeling technique has been adapted for multiple sampling sites. The behavior of the method has been studied using simulated data. Subsequently, the method has been applied to mapping data sets from the Southern California Air Quality Study. The projection of 17 chemical variables measured at up to 8 sampling sites provided a 2D, visually interpretable, geometrically reasonable arrangement of air pollution sources in the South Coast Air Basin.
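
    The minimal-spanning-tree step of the method can be sketched with standard scientific Python tools; the Kohonen Self-Organizing Feature Map training and the air-quality data themselves are omitted, and the random samples below are stand-ins.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        rng = np.random.default_rng(0)
        samples = rng.normal(size=(20, 17))    # e.g. 20 samples of 17 chemical variables

        dist = squareform(pdist(samples))      # full pairwise Euclidean distance matrix
        mst = minimum_spanning_tree(dist)      # sparse matrix of tree edge weights

        # The tree edges (i, j) can then be drawn on top of the 2D Kohonen map positions
        # of the samples to restore neighborhood information lost in the projection.
        rows, cols = mst.nonzero()
        for i, j in zip(rows, cols):
            print(f"edge {i:2d} -- {j:2d}  length {mst[i, j]:.2f}")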

  9. Effective Numerical Methods for Solving Elliptical Problems in Strengthened Sobolev Spaces

    NASA Technical Reports Server (NTRS)

    D'yakonov, Eugene G.

    1996-01-01

    Fourth-order elliptic boundary value problems in the plane can be reduced to operator equations in Hilbert spaces G that are certain subspaces of the Sobolev space W_2^2(Omega) ≡ G^(2). The appearance of asymptotically optimal algorithms for Stokes-type problems made it natural to focus on an approach that considers rot w ≡ (D_2 w, -D_1 w) ≡ u as a new unknown vector function, which automatically satisfies the condition div u = 0. In this work, we show that this approach can also be developed for an important class of problems from the theory of plates and shells with stiffeners. The main mathematical problem was to show that the well-known inf-sup condition (normal solvability of the divergence operator) holds for special Hilbert spaces. This result is also essential for certain hydrodynamics problems.

  10. Thrust vector control using electric actuation

    NASA Astrophysics Data System (ADS)

    Bechtel, Robert T.; Hall, David K.

    1995-01-01

    Presently, gimbaling of launch vehicle engines for thrust vector control is generally accomplished using a hydraulic system. In the case of the space shuttle solid rocket boosters and main engines, these systems are powered by hydrazine auxiliary power units. Use of electromechanical actuators would provide significant advantages in cost and maintenance. However, present energy source technologies such as batteries are heavy to the point of causing significant weight penalties. Utilizing capacitor technology developed by the Auburn University Space Power Institute in collaboration with the Auburn CCDS, Marshall Space Flight Center (MSFC) and Auburn are developing EMA system components with emphasis on high-discharge-rate energy sources compatible with space shuttle type thrust vector control requirements. Testing has been done at MSFC as part of EMA system tests with loads up to 66000 newtons for pulse times of several seconds. Results show such an approach to be feasible, providing the potential for reduced weight and operations costs for new launch vehicles.

  11. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into elements defined by space points, and the mass and external forces of the suspension cable are lumped at these space points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time-integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with the experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high calculation precision and meets the requirements of engineering application.
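
    The time-integration step described above can be illustrated with a minimal Python sketch: lumped space points advanced by the explicit central-difference method, with a simple tension-only spring as the element internal-force model. The stiffness, mass and time step are illustrative assumptions, not the full VFIFE formulation or the test-ropeway data of the paper.

        import numpy as np

        n, dt, steps = 11, 1.0e-3, 5000
        m, k, L0 = 0.5, 5.0e4, 1.0            # point mass [kg], element stiffness [N/m], rest length [m]
        g = np.array([0.0, -9.81])

        x = np.stack([np.linspace(0.0, 10.0, n), np.zeros(n)], axis=1)   # initial positions
        x_prev = x.copy()                     # zero initial velocity

        def internal_forces(x):
            f = np.zeros_like(x)
            for e in range(n - 1):
                d = x[e + 1] - x[e]
                length = np.linalg.norm(d)
                t = k * max(length - L0, 0.0) * d / length   # tension-only element force
                f[e] += t
                f[e + 1] -= t
            return f

        for _ in range(steps):
            a = internal_forces(x) / m + g    # acceleration of each space point
            x_new = 2.0 * x - x_prev + dt**2 * a             # central-difference update
            x_new[0], x_new[-1] = x[0], x[-1]                # both ends held fixed (supports)
            x_prev, x = x, x_new

        print("mid-span vertical position:", x[n // 2, 1])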

  12. Reducing vector-borne disease by empowering farmers in integrated vector management.

    PubMed

    van den Berg, Henk; von Hildebrand, Alexander; Ragunathan, Vaithilingam; Das, Pradeep K

    2007-07-01

    Irrigated agriculture exposes rural people to health risks associated with vector-borne diseases and pesticides used in agriculture and for public health protection. Most developing countries lack collaboration between the agricultural and health sectors to jointly address these problems. We present an evaluation of a project that uses the "farmer field school" method to teach farmers how to manage vector-borne diseases and how to improve rice yields. Teaching farmers about these two concepts together is known as "integrated pest and vector management". An intersectoral project targeting rice irrigation systems in Sri Lanka. Project partners developed a new curriculum for the field school that included a component on vector-borne diseases. Rice farmers in intervention villages who graduated from the field school took vector-control actions as well as improving environmental sanitation and their personal protection measures against disease transmission. They also reduced their use of agricultural pesticides, especially insecticides. The intervention motivated and enabled rural people to take part in vector-management activities and to reduce several environmental health risks. There is scope for expanding the curriculum to include information on the harmful effects of pesticides on human health and to address other public health concerns. Benefits of this approach for community-based health programmes have not yet been optimally assessed. Also, the institutional basis of the integrated management approach needs to be broadened so that people from a wider range of organizations take part. A monitoring and evaluation system needs to be established to measure the performance of integrated management initiatives.

  13. Vector magnetic fields in sunspots. I - Stokes profile analysis using the Marshall Space Flight Center magnetograph

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, K. S.; West, E. A.

    1991-01-01

    The Marshall Space Flight Center (MSFC) vector magnetograph is a tunable filter magnetograph with a bandpass of 125 mÅ. Results are presented of the inversion of Stokes polarization profiles, observed with the MSFC vector magnetograph centered on a sunspot, to recover the vector magnetic field parameters and the thermodynamic parameters of the spectral line-forming region, using the Fe I 5250.2 Å line and a nonlinear least-squares fitting technique. As a preliminary investigation, it is also shown that the recovered thermodynamic parameters could be better understood if the fitted parameters, such as Doppler width, opacity ratio, and damping constant, were broken down into more basic quantities like temperature, microturbulent velocity, or density parameter.

  14. Strategies for targeting primate neural circuits with viral vectors

    PubMed Central

    El-Shamayleh, Yasmine; Ni, Amy M.

    2016-01-01

    Understanding how the brain works requires understanding how different types of neurons contribute to circuit function and organism behavior. Progress on this front has been accelerated by optogenetics and chemogenetics, which provide an unprecedented level of control over distinct neuronal types in small animals. In primates, however, targeting specific types of neurons with these tools remains challenging. In this review, we discuss existing and emerging strategies for directing genetic manipulations to targeted neurons in the adult primate central nervous system. We review the literature on viral vectors for gene delivery to neurons, focusing on adeno-associated viral vectors and lentiviral vectors, their tropism for different cell types, and prospects for new variants with improved efficacy and selectivity. We discuss two projection targeting approaches for probing neural circuits: anterograde projection targeting and retrograde transport of viral vectors. We conclude with an analysis of cell type-specific promoters and other nucleotide sequences that can be used in viral vectors to target neuronal types at the transcriptional level. PMID:27052579

  15. Characteristic classes of gauge systems

    NASA Astrophysics Data System (ADS)

    Lyakhovich, S. L.; Sharapov, A. A.

    2004-12-01

    We define and study invariants which can be uniformly constructed for any gauge system. By a gauge system we understand an (anti-)Poisson supermanifold provided with an odd Hamiltonian self-commuting vector field called a homological vector field. This definition encompasses all the cases usually included into the notion of a gauge theory in physics as well as some other similar (but different) structures like Lie or Courant algebroids. For Lagrangian gauge theories or Hamiltonian first class constrained systems, the homological vector field is identified with the classical BRST transformation operator. We define characteristic classes of a gauge system as universal cohomology classes of the homological vector field, which are uniformly constructed in terms of this vector field itself. Not striving to exhaustively classify all the characteristic classes in this work, we compute those invariants which are built up in terms of the first derivatives of the homological vector field. We also consider the cohomological operations in the space of all the characteristic classes. In particular, we show that the (anti-)Poisson bracket becomes trivial when applied to the space of all the characteristic classes, instead the latter space can be endowed with another Lie bracket operation. Making use of this Lie bracket one can generate new characteristic classes involving higher derivatives of the homological vector field. The simplest characteristic classes are illustrated by the examples relating them to anomalies in the traditional BV or BFV-BRST theory and to characteristic classes of (singular) foliations.

  16. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space, with an all passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed through interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry the experimental studies on general parity-time symmetry physics and further reveals the unique functionalities enabled by the judiciously tailored unidirectional wave vectors in space.

  17. On Anholonomic Deformation, Geometry, and Differentiation

    DTIC Science & Technology

    2013-02-01

    [Report excerpt; recoverable information only.] The connection coefficients Γ^α_βχ are not necessarily Levi-Civita connection coefficients. The vector cross product × obeys, for two vectors V and W and two covectors α and β, … (three-dimensional space). Euclidean space: let G_AB(X) = G_A · G_B be the metric tensor of the space; the Riemann curvature tensor of its Levi-Civita connection vanishes identically, R^A_BCD = 2(∂_[B Γ^A_C]D + Γ^A_[B|E| Γ^E_C]D) = 0.

  18. Differential Calculus on h-Deformed Spaces

    NASA Astrophysics Data System (ADS)

    Herlemont, Basile; Ogievetsky, Oleg

    2017-10-01

    We construct the rings of generalized differential operators on the h-deformed vector space of gl-type. In contrast to the q-deformed vector space, where the ring of differential operators is unique up to an isomorphism, the general ring of h-deformed differential operators Diff_{h,σ}(n) is labeled by a rational function σ in n variables, satisfying an over-determined system of finite-difference equations. We obtain the general solution of the system and describe some properties of the rings Diff_{h,σ}(n).

  19. Support vector machine based decision for mechanical fault condition monitoring in induction motor using an advanced Hilbert-Park transform.

    PubMed

    Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader

    2012-09-01

    In this work we suggest an original fault signature based on an improved combination of the Hilbert and Park transforms. Starting from this combination we can create two fault signatures: the Hilbert modulus current space vector (HMCSV) and the Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of the spectral components relative to the studied faults (air-gap eccentricity and outer raceway ball bearing defect) are extracted in order to develop the input vector necessary for learning and testing the support vector machine, with the aim of automatically classifying the various states of the induction motor.
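
    The processing chain (space-vector construction, Hilbert transform, FFT of the resulting signature, SVM classification) can be sketched in a few lines of Python. The synthetic currents, the simplified Clarke step and the way the modulus signature is formed below are illustrative assumptions, not the authors' exact HMCSV/HPCSV definitions or their experimental data.

        import numpy as np
        from scipy.signal import hilbert
        from sklearn.svm import SVC

        fs, f0 = 10_000, 50.0
        t = np.arange(0.0, 1.0, 1.0 / fs)

        def spectral_features(fault_depth, rng):
            ia = np.cos(2 * np.pi * f0 * t) + fault_depth * np.cos(2 * np.pi * 7.0 * t)
            ib = np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
            i_alpha, i_beta = ia, (ia + 2 * ib) / np.sqrt(3)     # Clarke components from two phases
            modulus = np.abs(i_alpha + 1j * i_beta)              # current space-vector modulus
            envelope = np.abs(hilbert(modulus))                  # Hilbert-based modulus signature
            spectrum = np.abs(np.fft.rfft(envelope))             # classical FFT of the signature
            return spectrum[:50] + 0.01 * rng.normal(size=50)    # low-frequency magnitudes as features

        rng = np.random.default_rng(1)
        X = np.array([spectral_features(d, rng) for d in [0.0] * 30 + [0.2] * 30])
        y = np.array([0] * 30 + [1] * 30)                        # 0 = healthy, 1 = faulty

        clf = SVC(kernel="rbf").fit(X[::2], y[::2])              # train on every other example
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))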

  20. Evolution of Lamb Vector as a Vortex Breaking into Turbulence.

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Lu, X. Y.

    1996-11-01

    In an incompressible flow, either laminar or turbulent, the Lamb vector is solely responsible for nonlinear interactions. While its longitudinal part is balanced by the stagnation enthalpy, its transverse part is the unique source (acting as an external forcing in spectral space) that causes the flow to evolve. Moreover, in Reynolds-averaged flows the turbulent force can be derived exclusively from the Lamb vector instead of the full Reynolds stress tensor. Therefore, studying the evolution of the Lamb vector itself (both longitudinal and transverse parts) is of great interest. We have numerically examined this problem, taking the nonlinear destabilization of a viscous vortex as an example. In the later stage of this evolution we introduced a forcing to keep a statistically steady state, and observed the Lamb vector behavior in the resulting fine turbulence. The result is presented in both physical and spectral spaces.

  1. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

    Optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between the binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
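
    A purely numerical sketch of this inner-product recall scheme is shown below with bipolar (+1/-1) vectors; the optoelectronic hardware is replaced by numpy operations, and the stored patterns are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        memory = rng.choice([-1, 1], size=(4, 64))       # 4 stored reference "images"

        def recall(probe, iterations=5):
            v = probe.astype(float)
            for _ in range(iterations):
                weights = memory @ v                     # inner products with each stored vector
                v = np.sign(memory.T @ weights)          # weighted recombination + threshold
                v[v == 0] = 1.0
            return v

        noisy = memory[2] * rng.choice([1, 1, 1, 1, -1], size=64)   # roughly 20% of bits flipped
        overlap = int(recall(noisy) @ memory[2])
        print("overlap with stored pattern 2:", overlap, "out of 64")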

  2. Identification and Optimization of New Leads for Malaria Vector Control.

    PubMed

    Hueter, Ottmar F; Hoppé, Mark; Wege, Philip; Maienfisch, Peter

    2016-10-01

    A significant proportion of the world's population remains at risk from malaria, and whilst great progress has been made in reducing the number of malaria cases globally through the use of vector control insecticides, these gains are under threat from the emergence of insecticide resistance. The spread of resistance in the vector populations, principally to pyrethroids, is driving the need for the development of new tools for malaria vector control. In order to identify new leads 30,000 compounds from the Syngenta corporate chemical collection were tested in a newly developed screening platform. More than 3000 compounds (10%) showed activity at ≤200 mg active ingredient (AI) litre⁻¹ against Anopheles stephensi. Further evaluation resulted in the identification of 12 viable leads for the control of adult mosquitoes, most originating from current or former insecticide projects. Surprisingly, one of these leads emerged from a former PPO herbicide project and one from a former complex III fungicide project. This indicates that representatives of certain herbicide and fungicide projects and modes of action can also represent a valuable source of leads for malaria vector control. Optimization of the diphenyl ether lead 1 resulted in the identification of the cyano-pyridyl compound 31. Compound 31 exhibits good activity against mosquito species, including rdl-resistant Anopheles. It is only slightly weaker than permethrin and does not show relevant levels of cross-resistance to the organochlorine insecticide dieldrin.

  3. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.
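
    The decomposition step can be sketched as an ordinary least-squares projection of the observed flow field onto a handful of basis flow fields, one per pose dimension; the random basis below stands in for the tangent-space flows of a real body model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_pose_params = 2 * 64 * 64, 6       # stacked (u, v) flow values per frame

        basis = rng.normal(size=(n_pixels, n_pose_params))        # basis motion flow fields
        true_dpose = np.array([0.10, -0.05, 0.00, 0.02, 0.00, 0.01])
        observed_flow = basis @ true_dpose + 0.01 * rng.normal(size=n_pixels)

        weights, *_ = np.linalg.lstsq(basis, observed_flow, rcond=None)
        print("recovered pose update:", np.round(weights, 3))     # applied to the current pose estimate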

  4. Manipulation of group-velocity-locked vector dissipative solitons and properties of the generated high-order vector soliton structure.

    PubMed

    Zhu, S N; Wu, Z C; Fu, S N; Zhao, L M

    2018-03-20

    Details of various composites of the projections originated from a fundamental group-velocity-locked vector dissipative soliton (GVLVDS) are both experimentally and numerically explored. By combining the projections from the orthogonal polarization components of the GVLVDS, a high-order vector soliton structure with a double-humped pulse profile along one polarization and a single-humped pulse profile along the orthogonal polarization can be observed. Moreover, by de-chirping the composite double-humped pulse, the time separation between the two humps is reduced from 15.36 ps to 1.28 ps, indicating that the frequency chirp of the GVLVDS contributes significantly to the shaping of the double-humped pulse profile.

  5. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  6. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
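
    The standard finite-dimensional computation behind this representation is short enough to show directly: the orthonormal vectors minimizing the mean squared representation error are eigenvectors of the sample covariance matrix, as in the Karhunen-Loeve expansion and principal component analysis. The data below are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))   # 200 correlated data vectors

        mean = data.mean(axis=0)
        cov = np.cov(data - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
        basis = eigvecs[:, ::-1][:, :3]                   # top-3 orthonormal eigenvectors

        coeffs = (data - mean) @ basis                    # expansion coefficients
        reconstruction = mean + coeffs @ basis.T
        print("mean squared error of rank-3 representation:",
              np.mean((data - reconstruction) ** 2))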

  7. Binary black hole spacetimes with a helical Killing vector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Christian

    Binary black hole spacetimes with a helical Killing vector, which are discussed as an approximation for the early stage of a binary system, are studied in a projection formalism. In this setting the four-dimensional Einstein equations are equivalent to a three-dimensional gravitational theory with a SL(2,R)/SO(1,1) sigma model as the material source. The sigma model is determined by a complex Ernst equation. 2+1 decompositions of the three-metric are used to establish the field equations on the orbit space of the Killing vector. The two Killing horizons of spherical topology which characterize the black holes, the cylinder of light where the Killing vector changes from timelike to spacelike, and infinity are singular points of the equations. The horizon and the light cylinder are shown to be regular singularities, i.e., the metric functions can be expanded in a formal power series in their vicinity. The behavior of the metric at spatial infinity is studied in terms of formal series solutions to the linearized Einstein equations. It is shown that the spacetime is not asymptotically flat in the strong sense of having a smooth null infinity, under the assumption that the metric tends asymptotically to the Minkowski metric. In this case the metric functions have an oscillatory behavior in the radial coordinate in a nonaxisymmetric setting, and the asymptotic multipoles are not defined. The asymptotic behavior of the Weyl tensor near infinity shows that there is no smooth null infinity.

  8. Light rays and the tidal gravitational pendulum

    NASA Astrophysics Data System (ADS)

    Farley, A. N. St J.

    2018-05-01

    Null geodesic deviation in classical general relativity is expressed in terms of a scalar function η(λ), defined as the invariant magnitude of the connecting vector between neighbouring light rays in a null geodesic congruence projected onto a two-dimensional screen space orthogonal to the rays, where λ is an affine parameter along the rays. We demonstrate that η satisfies a harmonic oscillator-like equation with a λ-dependent frequency, which comprises terms accounting for local matter affecting the congruence and tidal gravitational effects from distant matter or gravitational waves passing through the congruence, represented by the amplitude of a complex Weyl driving term. Oscillating solutions for η imply the presence of conjugate or focal points along the rays. A polarisation angle is introduced, comprising the orientation of the connecting vector on the screen space and the phase of the Weyl driving term. Interpreting β as the polarisation of a gravitational wave encountering the light rays, we consider linearly polarised waves in the first instance. A highly non-linear, second-order ordinary differential equation (the tidal pendulum equation) is then derived, so called due to its analogy with the equation describing a non-linear, variable-length pendulum oscillating under gravity. The variable pendulum length is represented by the connecting vector magnitude, whilst the acceleration due to gravity in the familiar pendulum formulation is effectively replaced by the amplitude of the Weyl driving term. A tidal torque interpretation is also developed, where the torque is expressed as a coupling between the moment of inertia of the pendulum and the tidal gravitational field. Precessional effects are briefly discussed. A solution to the tidal pendulum equation in terms of familiar gravitational lensing variables is presented. The potential emergence of chaos in general relativity is discussed in the context of circularly, elliptically or randomly polarised gravitational waves encountering the null congruence.

  9. The impact of climate change on the geographical distribution of two vectors of Chagas disease: implications for the force of infection

    PubMed Central

    Medone, Paula; Ceccarelli, Soledad; Parham, Paul E.; Figuera, Andreína; Rabinovich, Jorge E.

    2015-01-01

    Chagas disease, caused by the parasite Trypanosoma cruzi, is the most important vector-borne disease in Latin America. The vectors are insects belonging to the Triatominae (Hemiptera, Reduviidae), and are widely distributed in the Americas. Here, we assess the implications of climatic projections for 2050 on the geographical footprint of two of the main Chagas disease vectors: Rhodnius prolixus (tropical species) and Triatoma infestans (temperate species). We estimated the epidemiological implications of current to future transitions in the climatic niche in terms of changes in the force of infection (FOI) on the rural population of two countries: Venezuela (tropical) and Argentina (temperate). The climatic projections for 2050 showed heterogeneous impact on the climatic niches of both vector species, with a decreasing trend of suitability of areas that are currently at high-to-moderate transmission risk. Consequently, climatic projections affected the FOI for Chagas disease differently in Venezuela and Argentina. Despite the heterogeneous results, our main conclusions point to a decreasing trend in the number of new cases of Tr. cruzi human infections per year between current and future conditions using a climatic niche approach. PMID:25688019

  10. (New hosts and vectors for genome cloning)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.

  11. [New hosts and vectors for genome cloning]. Progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.

  12. Computational model of a vector-mediated epidemic

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana Gomes; Dickman, Ronald

    2015-05-01

    We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
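
    A lattice model in this spirit fits in a short Python sketch: sedentary hosts on a grid, vectors performing a random walk, cross-species transmission at shared sites, and recovery. The lattice size, rates and update rules below are illustrative choices, not the parameters of the model described in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        L, n_vectors, steps = 32, 400, 2000
        beta_hv, beta_vh, recovery = 0.3, 0.3, 0.02      # infection and recovery probabilities

        host_infected = np.zeros((L, L), dtype=bool)
        host_infected[L // 2, L // 2] = True             # single initially infected host
        vec_pos = rng.integers(0, L, size=(n_vectors, 2))
        vec_infected = np.zeros(n_vectors, dtype=bool)

        for _ in range(steps):
            # vectors diffuse on the lattice (periodic boundaries)
            vec_pos = (vec_pos + rng.integers(-1, 2, size=vec_pos.shape)) % L
            at = (vec_pos[:, 0], vec_pos[:, 1])
            # host -> vector and vector -> host transmission at shared sites
            vec_infected |= host_infected[at] & (rng.random(n_vectors) < beta_hv)
            new_host = vec_infected & (rng.random(n_vectors) < beta_vh)
            host_infected[vec_pos[new_host, 0], vec_pos[new_host, 1]] = True
            # infected hosts return to the susceptible state with a small probability
            host_infected &= rng.random((L, L)) >= recovery

        print("fraction of hosts currently infected:", host_infected.mean())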

  13. Space Interferometry Mission: Dynamical Observations of Galaxies (SIMDOG)

    NASA Technical Reports Server (NTRS)

    Shaya, Edward J.; Borne, Kirk D.; Nusser, Adi; Peebles, P. J. E.; Tonry, John; Tully, Brent R.; Vogel, Stuart; Zaritsky, Dennis

    2004-01-01

    Space Interferometry Mission (SIM) will be used to obtain proper motions for a sample of 27 galaxies; these will be the first proper motion measurements of galaxies beyond the satellite system of the Milky Way. SIM measurements lead to knowledge of the full 6-dimensional position and velocity vector of each galaxy. In conjunction with new gravitational flow models, the result will be the first total mass measurements of individual galaxies. The project includes development of powerful theoretical methods for orbital calculations. This SIM study will lead to vastly improved determinations of individual galaxy masses, halo sizes, and the fractional contribution of dark matter. Astronomers have struggled to calculate the orbits of galaxies with only position and redshift information. Traditional N-body techniques are unsuitable for an analysis backward in time from a present distribution if any components of velocity or position are not very precisely known.

  14. Exterior spacecraft subsystem protective shielding analysis and design

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.; Taylor, Roy A.

    1990-01-01

    All spacecraft are susceptible to impacts by meteoroids and pieces of orbiting space debris. An effective mechanism is developed to protect external spacecraft subsystems against damage by ricochet particles formed during such impacts. Equations and design procedures for protective shield panels are developed based on observed ricochet phenomena and calculated ricochet particle sizes and speeds. It is found that the diameter of the most damaging ricochet debris particle can be as large as 40 percent of the original projectile diameter, and that it can travel at speeds between 24 and 36 percent of the original projectile impact velocity. Panel dimensions are shown to be strongly dependent on their inclination to the impact velocity vector and on their distribution around a spacecraft module. It is concluded that obliquity effects of high-speed impacts must be considered in the design of any structure exposed to the meteoroid and space debris environment.

  15. The Infinitesimal Moduli Space of Heterotic G 2 Systems

    NASA Astrophysics Data System (ADS)

    de la Ossa, Xenia; Larfors, Magdalena; Svanes, Eirik E.

    2018-06-01

    Heterotic string compactifications on integrable G2 structure manifolds Y with instanton bundles (V, A) and (TY, θ̃) yield supersymmetric three-dimensional vacua that are of interest in physics. In this paper, we define a covariant exterior derivative D and show that it is equivalent to a heterotic G2 system encoding the geometry of the heterotic string compactifications. This operator D acts on a bundle Q = T*Y ⊕ End(V) ⊕ End(TY) and satisfies a nilpotency condition Ď² = 0 for an appropriate projection Ď of D. Furthermore, we determine the infinitesimal moduli space of these systems and show that it corresponds to the finite-dimensional cohomology group H¹_D(Q). We comment on the similarities and differences of our result with Atiyah's well-known analysis of deformations of holomorphic vector bundles over complex manifolds. Our analysis leads to results that are of relevance to all orders in the α' expansion.

  16. Plasmonic nanopatch array for optical integrated circuit applications.

    PubMed

    Qu, Shi-Wei; Nie, Zai-Ping

    2013-11-08

    Future plasmonic integrated circuits with the capability of extremely high-speed data processing at optical frequencies will be dominated by the efficient optical emission (excitation) from (of) plasmonic waveguides. Towards this goal, plasmonic nanoantennas, currently a hot topic in the field of plasmonics, have potential to bridge the mismatch between the wave vector of free-space photonics and that of the guided plasmonics. To manipulate light at will, plasmonic nanoantenna arrays will definitely be more efficient than isolated nanoantennas. In this article, the concepts of microwave antenna arrays are applied to efficiently convert plasmonic waves in the plasmonic waveguides into free-space optical waves or vice versa. The proposed plasmonic nanoantenna array, with nanopatch antennas and a coupled wedge plasmon waveguide, can also act as an efficient spectrometer to project different wavelengths into different directions, or as a spatial filter to absorb a specific wavelength at a specified incident angle.

  17. Bayes linear covariance matrix adjustment

    NASA Astrophysics Data System (ADS)

    Wilkinson, Darren J.

    1995-12-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
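
    The core adjustment used throughout Bayes linear methods is itself a small orthogonal-projection formula, E_D(X) = E(X) + Cov(X, D) Var(D)^(-1) (D - E(D)), with the corresponding reduction of variance. The numerical sketch below uses illustrative prior moments, not the specifications from the thesis, and works with vector quantities rather than the covariance-matrix-valued quantities the thesis treats.

        import numpy as np

        E_X, E_D = np.array([1.0, 2.0]), np.array([0.0])        # prior expectations
        var_X = np.array([[1.0, 0.3], [0.3, 2.0]])              # prior variance of X
        cov_XD = np.array([[0.8], [0.4]])                       # prior covariance of X with data D
        var_D = np.array([[1.5]])                               # prior variance of D

        observed_D = np.array([1.2])
        adjusted_E_X = E_X + cov_XD @ np.linalg.solve(var_D, observed_D - E_D)
        adjusted_var_X = var_X - cov_XD @ np.linalg.solve(var_D, cov_XD.T)

        print("adjusted expectation:", adjusted_E_X)
        print("adjusted variance:\n", adjusted_var_X)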

  18. Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism

    DTIC Science & Technology

    2013-11-12

    [Report documentation page residue; recoverable information only.] Authors: Nicholas J. Frigo, Vincent J. Urick, and Frank Bucholtz, Naval Research Laboratory, Code 5650 (Photonics Technology Branch, Optical Sciences Division), 4555 Overlook Avenue SW, Washington, DC; Annapolis, Maryland. Title: Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism. Distribution: unclassified, unlimited. Point of contact: Vincent J. Urick, (202) 767-9352. The abstract fragment begins "Coupled mode …"

  19. On the n-symplectic structure of faithful irreducible representations

    NASA Astrophysics Data System (ADS)

    Norris, L. K.

    2017-04-01

    Each faithful irreducible representation of an N-dimensional vector space V1 on an n-dimensional vector space V2 is shown to define a unique irreducible n-symplectic structure on the product manifold V1 × V2. The basic details of the associated Poisson algebra are developed for the special case N = n², and 2n-dimensional symplectic submanifolds are shown to exist.

  20. A phenomenological calculus of Wiener description space.

    PubMed

    Richardson, I W; Louie, A H

    2007-10-01

    The phenomenological calculus is a categorical example of Robert Rosen's modeling relation. This paper is an alligation of the phenomenological calculus and generalized harmonic analysis, another categorical example. Our epistemological exploration continues into the realm of Wiener description space, in which constitutive parameters are extended from vectors to vector-valued functions of a real variable. Inherent in the phenomenology are fundamental representations of time and nearness to equilibrium.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features and scalar field visualization, and vector field topology covers finding topological features and vector field visualization.

  2. Decomposition of group-velocity-locked-vector-dissipative solitons and formation of the high-order soliton structure by the product of their recombination.

    PubMed

    Wang, Xuan; Li, Lei; Geng, Ying; Wang, Hanxiao; Su, Lei; Zhao, Luming

    2018-02-01

    By using a polarization manipulation and projection system, we numerically decomposed the group-velocity-locked-vector-dissipative solitons (GVLVDSs) from a normal dispersion fiber laser and studied the combination of the projections of the phase-modulated components of the GVLVDS through a polarization beam splitter. Pulses with a structure similar to a high-order vector soliton could be obtained, which could be considered as a pseudo-high-order GVLVDS. It is found that, although GVLVDSs are intrinsically different from group-velocity-locked-vector solitons generated in fiber lasers operated in the anomalous dispersion regime, similar characteristics for the generation of pseudo-high-order GVLVDS are obtained. However, pulse chirp plays a significant role on the generation of pseudo-high-order GVLVDS.

  3. Vectors in Use in a 3D Juggling Game Simulation

    ERIC Educational Resources Information Center

    Kynigos, Chronis; Latsi, Maria

    2006-01-01

    The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13 year-old students in relation to vectors as…

  4. 4 × 20 Gbit/s mode division multiplexing over free space using vector modes and a q-plate mode (de)multiplexer

    NASA Astrophysics Data System (ADS)

    Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.

    2015-05-01

    Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as, radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550nm), comprising an aggregate 80 Gbit/s, were transmitted ~1m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties < 3.41dB.

  5. Closed-Loop Simulation Study of the Ares I Upper Stage Thrust Vector Control Subsystem for Nominal and Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Fulton, Chris; Connolly, Joe; Hunker, Keith

    2010-01-01

    As a replacement for the current Shuttle, the Ares I rocket and Orion crew module are currently under development by the National Aeronautics and Space Administration (NASA). This new launch vehicle is segmented into major elements, one of which is the Upper Stage (US). The US is further broken down into subsystems, one of which is the Thrust Vector Control (TVC) subsystem, which gimbals the US rocket nozzle. Nominal and off-nominal simulations for the US TVC subsystem are needed in order to support the development of software used for control systems and diagnostics. In addition, a clear and complete understanding of the effect of off-nominal conditions on the vehicle flight dynamics is desired. To achieve these goals, a simulation of the US TVC subsystem combined with the Ares I vehicle was developed. This closed-loop dynamic model was created using MATLAB's Simulink and a modified version of a vehicle simulation, MAVERIC, which is currently used in the Ares I project and was developed by the Marshall Space Flight Center (MSFC). For this report, the effects on the flight trajectory of the Ares I vehicle are investigated after failures are injected into the US TVC subsystem. The comparisons of the off-nominal conditions observed in the US TVC subsystem with those of the Ares I vehicle flight dynamics are of particular interest.

  6. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

    The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimension with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive than classical LDA in terms of both classification accuracy and computational cost/time. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, by using a linear discriminant classifier.
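
    The classical-LDA projection step compared in the paper can be sketched with scikit-learn; synthetic data stand in for the EMG feature sets, and the extended LDA variants are not reproduced here.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=600, n_features=28, n_informative=10,
                                   n_classes=6, n_clusters_per_class=1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        lda = LinearDiscriminantAnalysis(n_components=5)     # at most n_classes - 1 components
        Z_tr = lda.fit_transform(X_tr, y_tr)                 # projected training features
        Z_te = lda.transform(X_te)

        clf = LinearDiscriminantAnalysis().fit(Z_tr, y_tr)   # linear discriminant classifier
        print("accuracy after projection to 5 dimensions:", clf.score(Z_te, y_te))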

  7. Dengue Fever Occurrence and Vector Detection by Larval Survey, Ovitrap and MosquiTRAP: A Space-Time Clusters Analysis

    PubMed Central

    de Melo, Diogo Portella Ornelas; Scherrer, Luciano Rios; Eiras, Álvaro Eduardo

    2012-01-01

    The use of vector surveillance tools for preventing dengue disease requires fine assessment of risk, in order to improve vector control activities. Nevertheless, the thresholds between vector detection and dengue fever occurrence are currently not well established. In Belo Horizonte (Minas Gerais, Brazil), dengue has been endemic for several years. From January 2007 to June 2008, the dengue vector Aedes (Stegomyia) aegypti was monitored by ovitrap, the sticky-trap MosquiTRAP™ and larval surveys in a study area in Belo Horizonte. Using a space-time scan for cluster detection implemented in SaTScan software, the vector presence recorded by the different monitoring methods was evaluated. Clusters of vectors and dengue fever were detected. It was verified that the ovitrap and MosquiTRAP vector detection methods predicted dengue occurrence better than larval survey, both spatially and temporally. MosquiTRAP and ovitrap presented similar results of space-time intersections with dengue fever clusters. Nevertheless, ovitrap clusters presented longer duration periods than MosquiTRAP ones, less accurately signaling the dengue risk areas, since the detection of vector clusters during most of the study period was not necessarily correlated with dengue fever occurrence. It was verified that ovitrap clusters occurred more than 200 days (values ranged from 97.0±35.35 to 283.0±168.4 days) before dengue fever clusters, whereas MosquiTRAP clusters preceded dengue fever clusters by approximately 80 days (values ranged from 65.5±58.7 to 94.0±14.3 days), the latter being more temporally precise. Thus, in the present cluster analysis study MosquiTRAP presented superior results for signaling dengue transmission risks both geographically and temporally. Since early detection is crucial for planning and deploying effective preventions, MosquiTRAP showed to be a reliable tool, and this method provides groundwork for the development of even more precise tools. PMID:22848729

  8. Vertical amplitude phase structure of a low-frequency acoustic field in shallow water

    NASA Astrophysics Data System (ADS)

    Kuznetsov, G. N.; Lebedev, O. V.; Stepanov, A. N.

    2016-11-01

    We obtain in integral and analytic form the relations for calculating the amplitude and phase characteristics of an interference structure of orthogonal projections of the oscillation velocity vector in shallow water. For different frequencies and receiver depths, we numerically study the source depth dependences of the effective phase velocities of an equivalent plane wave, the orthogonal projections of the sound pressure phase gradient, and the projections of the oscillation velocity vector. We establish that at low frequencies in zones of interference maxima, independently of source depth, weakly varying effective phase velocity values are observed, which exceed the sound velocity in water by 5-12%. We show that the angles of arrival of the equivalent plane wave and the oscillation velocity vector in the general case differ; however, they virtually coincide in the zone of the interference maximum of the sound pressure under the condition that the horizontal projections of the oscillation velocity appreciably exceed the value of the vertical projection. We give recommendations on using the sound field characteristics in zones with maximum values for solving rangefinding and signal-detection problems.

  9. Cost-effectiveness of environmental management for vector control in resource development projects.

    PubMed

    Bos, R

    1991-01-01

    Vector control methods are traditionally divided into chemical, biological and environmental management approaches, and this distinction is also reflected in certain financial and economic aspects. This is particularly true for environmental modification, usually engineering or other structural works. It is highly capital intensive, as opposed to chemical and biological control, which require recurrent expenditures, and discount rates are therefore a prominent consideration in deciding for one or the other approach. Environmental manipulation requires recurrent action, but can often be carried out with community participation, which raises the issue of opportunity costs. The incorporation of environmental management in resource projects is generally impeded by economic considerations. The Internal Rate of Return continues to be a crucial criterion for funding agencies and development banks to support new projects; at the same time, governments of debt-ridden countries in the Third World will do their best to avoid additional loans for such frills as environmental and health safeguards. Two approaches can be recommended to nevertheless ensure the incorporation of environmental management measures in resource projects in an affordable way. First, there are several examples of cases where environmental management measures either have a dual benefit (increasing agricultural production and reducing vector-borne disease transmission) or can be implemented at zero cost. Second, the additional costs involved in structural modifications can be separated from the project development costs considered in the calculations of the Internal Rate of Return, and financial support can be sought from bilateral technical cooperation agencies particularly interested in environmental and health issues. There is a dearth of information on the cost-effectiveness of alternative vector control strategies in the developing country context. The process of integrating vector control into the general health services will make it even more difficult to gain a clear insight into the matter.

  10. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    PubMed

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
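
    The Euclidean special case that the paper generalizes can be sketched directly with scikit-learn's dictionary learning; the synthetic data and parameter choices below are illustrative only.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(0)
        true_dict = rng.normal(size=(15, 8))                    # 15 atoms in R^8
        codes = rng.normal(size=(300, 15)) * (rng.random((300, 15)) < 0.1)
        X = codes @ true_dict + 0.01 * rng.normal(size=(300, 8))

        learner = DictionaryLearning(n_components=15, transform_algorithm="omp",
                                     transform_n_nonzero_coefs=3, random_state=0)
        sparse_codes = learner.fit_transform(X)                 # sparse coefficients per sample
        print("learned dictionary shape:", learner.components_.shape)
        print("mean nonzeros per code:", (sparse_codes != 0).sum(axis=1).mean())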

  11. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  12. A Set of Computer Projects for an Electromagnetic Fields Class.

    ERIC Educational Resources Information Center

    Gleeson, Ronald F.

    1989-01-01

    Presented are three computer projects: vector analysis, electric field intensities at various distances, and the Biot-Savart law. Programming suggestions and project results are provided. One month is suggested for each project. (MVL)
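
    The Biot-Savart project lends itself to a short numerical sketch. The following is one possible, illustrative implementation (not taken from the cited article): it sums the Biot-Savart contributions of a discretized circular current loop and checks the on-axis field against the analytic formula. The loop radius, current, and field point are assumptions.

      import numpy as np

      MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

      def biot_savart_loop(point, radius=1.0, current=1.0, segments=2000):
          """Magnetic field (T) at `point` due to a circular loop of the given
          radius (m) carrying `current` (A), centered at the origin in the x-y
          plane, by summing the Biot-Savart law over short segments."""
          phi = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
          pos = radius * np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
          dphi = 2.0 * np.pi / segments
          dl = radius * dphi * np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)
          r = point - pos                                   # from each segment to the field point
          r_mag = np.linalg.norm(r, axis=1, keepdims=True)
          dB = MU0 * current / (4.0 * np.pi) * np.cross(dl, r) / r_mag**3
          return dB.sum(axis=0)

      # Check the on-axis field against the analytic result
      # B_z = mu0*I*R^2 / (2*(R^2 + z^2)**1.5).
      z = 0.5
      numeric = biot_savart_loop(np.array([0.0, 0.0, z]))
      analytic = MU0 * 1.0 * 1.0**2 / (2.0 * (1.0**2 + z**2) ** 1.5)
      print(numeric[2], analytic)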

  13. Adaptive Hybrid Picture Coding. Volume 2.

    DTIC Science & Technology

    1985-02-01

    [Fragments recovered from a damaged scan.] Table-of-contents entries include: V.a Measurement Vector; V.b Size Variable / Centroid Vector; V.c Shape Vector; Details of the Program for the Adaptive Line of Sight Method; and Appendix B, Details of the Feature Vector Formation Program. A surviving text fragment notes that shape recognition is analogous to recognition of curves in space, so well-known concepts and theorems from differential geometry can be applied.

  14. Vehicle Based Vector Sensor

    DTIC Science & Technology

    2015-09-28

    [Fragments recovered from a damaged extract.] The record describes a buoyant unmanned underwater vehicle with an interior space, in which the length of the vehicle is equal to one tenth of the acoustic wavelength, and which can function as an acoustic vector sensor. The description of the prior art breaks off: "It is known that a propagating ..."

  15. [New hosts and vectors for genome cloning]. Progress report, 1990--1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The main goal of our project remains the development of new bacterial hosts and vectors for the stable propagation of human DNA clones in E. coli. During the past six months of our current budget period, we have (1) continued to develop new hosts that permit the stable maintenance of unstable features of human DNA, and (2) developed a series of vectors for (a) cloning large DNA inserts, (b) assessing the frequency of human sequences that are lethal to the growth of E. coli, and (c) assessing the stability of human sequences cloned in M13 for large-scale sequencing projects.

  16. The impact of climate change on the geographical distribution of two vectors of Chagas disease: implications for the force of infection.

    PubMed

    Medone, Paula; Ceccarelli, Soledad; Parham, Paul E; Figuera, Andreína; Rabinovich, Jorge E

    2015-04-05

    Chagas disease, caused by the parasite Trypanosoma cruzi, is the most important vector-borne disease in Latin America. The vectors are insects belonging to the Triatominae (Hemiptera, Reduviidae), and are widely distributed in the Americas. Here, we assess the implications of climatic projections for 2050 on the geographical footprint of two of the main Chagas disease vectors: Rhodnius prolixus (tropical species) and Triatoma infestans (temperate species). We estimated the epidemiological implications of current to future transitions in the climatic niche in terms of changes in the force of infection (FOI) on the rural population of two countries: Venezuela (tropical) and Argentina (temperate). The climatic projections for 2050 showed a heterogeneous impact on the climatic niches of both vector species, with a decreasing trend in the suitability of areas that are currently at high-to-moderate transmission risk. Consequently, the climatic projections affected the FOI for Chagas disease differently in Venezuela and Argentina. Despite the heterogeneous results, our main conclusions point to a decreasing trend in the number of new cases of Tr. cruzi human infections per year between current and future conditions using a climatic niche approach. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  17. Radio Observations of the Ionosphere From an Imaging Array and a CubeSat

    NASA Astrophysics Data System (ADS)

    Isham, B.; Gustavsson, B.; Bullett, T. W.; Bergman, J. E. S.; Rincón-Charris, A.; Bruhn, F.; Funk, P.

    2017-12-01

    The ionosphere is a source of many radio emissions in the various low-frequency, medium-frequency, and high-frequency bands (0 to 30 MHz). In addition to natural radio emissions, artificial emissions can be stimulated using high-power radiowave ionospheric modification facilities. Two complementary projects are underway for the purpose of improving our knowledge of the processes of radio emissions from the ionosphere. One project is the Aguadilla radio array, located in northwestern Puerto Rico. The Aguadilla array is intended to produce 2 to 25 MHz radio images of the ionosphere, as well as to perform bistatic radar imaging of the ionosphere over Puerto Rico. The array will consist of multiple antenna elements, each of which is a single active (electromagnetically short) crossed electric dipole. The elements are arranged within a roughly 200 by 300-meter core array, in a semi-random pattern providing an optimal distribution of baseline vectors, with 6-meter minimum spacing to eliminate spatial aliasing. In addition, several elements are arranged in a partial ring around the central core, providing a roughly four times expanded region in u-v space for improved image resolution and quality. Phase is maintained via cabled connections to a central location. A remote array is also being developed, in which phase is maintained between elements through the use of GPS-disciplined rubidium clocks. The other project involves the GimmeRF radio instrument, designed for 0.3 to 30 MHz vector observation of the radio electric field, and planned for launch in 2020 on a CubeSat. The data rate that can be sustained by GimmeRF far exceeds the capacity of any available communication strategy. By exploiting fast on-board computing and efficient artificial intelligence (AI) algorithms for analysis and data selection, the usage of the telemetry link can be optimized and value added to the mission. Radio images recorded by the radio array from below the ionosphere can be directly compared with the radio data received by GimmeRF in the topside ionosphere, with the goal of better understanding the geometry and therefore the mechanisms of the radio emission processes.

  18. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.

  19. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
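
    A toy sketch of the two ingredients described above, Takens-style delay embedding followed by independent component analysis; the test signal, embedding dimension, and lag are illustrative assumptions, and scikit-learn's FastICA stands in for the ICA step (this is not the authors' implementation).

      import numpy as np
      from sklearn.decomposition import FastICA

      def delay_embed(series, dim, lag):
          """Takens-style delay embedding: stack `dim` lagged copies of each
          channel of a (possibly multivariate) time series."""
          series = np.asarray(series, dtype=float)
          if series.ndim == 1:
              series = series[:, None]
          n = series.shape[0] - (dim - 1) * lag
          return np.hstack([series[i * lag:i * lag + n] for i in range(dim)])

      # Illustrative observable of a (here merely quasi-periodic) process; in
      # practice the embedding dimension and lag are chosen from the data.
      t = np.arange(0.0, 200.0, 0.05)
      x = np.sin(t) + 0.3 * np.sin(2.7 * t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
      embedded = delay_embed(x, dim=6, lag=4)
      phase_vars = FastICA(n_components=3, random_state=0).fit_transform(embedded)
      print(phase_vars.shape)   # linearly independent "phase variables"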

  20. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  1. Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendices; Index.

  2. Student Solution Manual for Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendix.

  3. Lorentz symmetric n-particle systems without ``multiple times''

    NASA Astrophysics Data System (ADS)

    Smith, Felix

    2013-05-01

    The need for multiple times in relativistic n-particle dynamics is a consequence of Minkowski's postulated symmetry between space and time coordinates in a space-time s = [x1, ..., x4] = [x, y, z, ict], Eq. (1). Poincaré doubted the need for this space-time symmetry, believing Lorentz covariance could also prevail in some geometries with a three-dimensional position space and a quite different time coordinate. The Hubble expansion observed later justifies a specific geometry of this kind, a negatively curved position 3-space expanding with time at the Hubble rate lH(t) = lH,0 + cΔt (F. T. Smith, Ann. Fond. L. de Broglie, 30, 179 (2005) and 35, 395 (2010)). Its position 4-vector is not s but q = [x1, ..., x4] = [x, y, z, i lH(t)], and shows no 4-space symmetry. What is observed is always a difference 4-vector Δq = [Δx, Δy, Δz, icΔt], and this displays the structure of Eq. (1) perfectly. Thus we find the standard 4-vector of special relativity in a geometry that does not require a Minkowski space-time at all, but a quite different geometry with an expanding 3-space symmetry and an independent time. The same Lorentz symmetry with but a single time extends to 2- and n-body systems.

  4. Fast metabolite identification with Input Output Kernel Regression.

    PubMed

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-06-15

    An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Contact: celine.brouard@aalto.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
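
    The two-phase structure described above can be illustrated with a toy sketch. This is not the authors' IOKR implementation (which uses operator-valued kernels, real spectra, and molecular fingerprints); it is a simplified stand-in that assumes explicit output feature vectors and synthetic data.

      import numpy as np

      def gaussian_kernel(A, B, gamma=0.5):
          """Gaussian (RBF) kernel matrix between the rows of A and B."""
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def iokr_train(K_train, Y_feat, lam=1e-2):
          """Phase 1 (input space -> output feature space): kernel ridge
          regression coefficients, assuming explicit output features."""
          n = K_train.shape[0]
          return np.linalg.solve(K_train + lam * np.eye(n), Y_feat)

      def iokr_predict(K_test_train, coeffs, candidate_feat):
          """Phase 2 (pre-image): predict output features, then return the
          index of the nearest candidate in output feature space."""
          pred = K_test_train @ coeffs
          d2 = ((pred[:, None, :] - candidate_feat[None, :, :]) ** 2).sum(-1)
          return d2.argmin(axis=1)

      # Synthetic stand-ins for spectra (X) and output features (Y).
      rng = np.random.default_rng(0)
      X_train, Y_train = rng.standard_normal((40, 8)), rng.standard_normal((40, 5))
      X_test = X_train[:3] + 0.01 * rng.standard_normal((3, 8))
      coeffs = iokr_train(gaussian_kernel(X_train, X_train), Y_train)
      print(iokr_predict(gaussian_kernel(X_test, X_train), coeffs, Y_train))  # likely [0 1 2]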

  5. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628

  6. A link between torse-forming vector fields and rotational hypersurfaces

    NASA Astrophysics Data System (ADS)

    Chen, Bang-Yen; Verstraelen, Leopold

    Torse-forming vector fields introduced by Yano [On torse forming direction in a Riemannian space, Proc. Imp. Acad. Tokyo 20 (1944) 340-346] are a natural extension of concurrent and concircular vector fields. Such vector fields have many nice applications to geometry and mathematical physics. In this paper, we establish a link between rotational hypersurfaces and torse-forming vector fields. More precisely, our main result states that, for a hypersurface M of 𝔼^(n+1) with n ≥ 3, the tangential component x^T of the position vector field of M is a proper torse-forming vector field on M if and only if M is contained in a rotational hypersurface whose axis of rotation contains the origin.

  7. Spatial attenuation of different sound field components in a water layer and shallow-water sediments

    NASA Astrophysics Data System (ADS)

    Belov, A. I.; Kuznetsov, G. N.

    2017-11-01

    The paper presents the results of an experimental study of spatial attenuation of low-frequency vector-scalar sound fields in shallow water. The experiments employed a towed pneumatic cannon and spatially separated four-component vector-scalar receiver modules. Narrowband analysis of received signals made it possible to estimate the attenuation coefficients of the first three modes in the frequency range of 26-182 Hz and calculate the frequency dependences of the sound absorption coefficients in the upper part of bottom sediments. We analyze the experimental and calculated (using acoustic calibration of the waveguide) laws of the drop in sound pressure and orthogonal vector projections of the oscillation velocity. It is shown that the vertical projection of the oscillation velocity vector decreases significantly faster than the sound pressure field.

  8. A finite state projection algorithm for the stationary solution of the chemical master equation.

    PubMed

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-21

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.
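
    As an illustration of the kind of finite linear-algebraic computation mentioned above (not the authors' sFSP algorithm or its error bounds), the sketch below assembles the generator of a simple birth-death reaction network on a truncated state-space and solves pi*Q = 0 with the normalization constraint; the reaction rates and truncation size are assumptions.

      import numpy as np

      def stationary_truncated_birth_death(k_prod=10.0, k_deg=1.0, n_max=200):
          """Stationary distribution of a birth-death CTMC (production at rate
          k_prod, degradation at rate k_deg*n), truncated to states 0..n_max:
          solve pi*Q = 0 together with sum(pi) = 1."""
          n = n_max + 1
          Q = np.zeros((n, n))
          for i in range(n):
              if i < n_max:
                  Q[i, i + 1] = k_prod        # birth
              if i > 0:
                  Q[i, i - 1] = k_deg * i     # death
              Q[i, i] = -Q[i].sum()           # diagonal closes each row
          A = np.vstack([Q.T, np.ones((1, n))])   # append the normalization row
          b = np.zeros(n + 1)
          b[-1] = 1.0
          pi, *_ = np.linalg.lstsq(A, b, rcond=None)
          return pi

      pi = stationary_truncated_birth_death()
      print(pi.argmax(), pi.sum())   # mode near k_prod/k_deg; probabilities sum to ~1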

  9. A finite state projection algorithm for the stationary solution of the chemical master equation

    NASA Astrophysics Data System (ADS)

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-01

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.

  10. Some remarks on the topology of hyperbolic actions of Rn on n-manifolds

    NASA Astrophysics Data System (ADS)

    Bouloc, Damien

    2017-11-01

    This paper contains some results on the topology of a nondegenerate action of Rn on a compact connected n-manifold M when the action is totally hyperbolic (i.e. its toric degree is zero). We study the R-action generated by a fixed vector of Rn, that provides some results on the number of hyperbolic domains and the number of fixed points of the action. We study with more details the case of the 2-sphere, in particular we investigate some combinatorial properties of the associated 4-valent graph embedded in S2. We also construct hyperbolic actions in dimension 3, on the sphere S3 and on the projective space RP3.

  11. The canonical Lagrangian approach to three-space general relativity

    NASA Astrophysics Data System (ADS)

    Shyam, Vasudev; Venkatesh, Madhavan

    2013-07-01

    We study the action for the three-space formalism of general relativity, better known as the Barbour-Foster-Ó Murchadha action, which is a square-root Baierlein-Sharp-Wheeler action. In particular, we explore the (pre)symplectic structure by pulling it back via a Legendre map to the tangent bundle of the configuration space of this action. With it we attain the canonical Lagrangian vector field which generates the gauge transformations (3-diffeomorphisms) and the true physical evolution of the system. This vector field encapsulates all the dynamics of the system. We also discuss briefly the observables and perennials for this theory. We then present a symplectic reduction of the constrained phase space.

  12. Vector Magnetograph Design

    NASA Technical Reports Server (NTRS)

    Chipman, Russell A.

    1996-01-01

    This report covers work performed during the period of November 1994 through March 1996 on the design of a Space-borne Solar Vector Magnetograph. This work has been performed as part of a design team under the supervision of Dr. Mona Hagyard and Dr. Alan Gary of the Space Science Laboratory. Many tasks were performed and this report documents the results from some of those tasks, each contained in the corresponding appendix. Appendices are organized in chronological order.

  13. The Absolute Vector Magnetometers on Board Swarm, Lessons Learned From Two Years in Space.

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Leger, J. M.; Vigneron, P.; Brocco, L.; Olsen, N.; Jager, T.; Bertrand, F.; Fratter, I.; Sirol, O.; Lalanne, X.

    2015-12-01

    ESA's Swarm satellites carry 4He absolute magnetometers (ASM), designed by CEA-Léti and developed in partnership with CNES. These instruments are the first-ever space-borne magnetometers to use a common sensor to simultaneously deliver 1Hz independent absolute scalar and vector readings of the magnetic field. They have provided the very high accuracy scalar field data nominally required by the mission (for both science and calibration purposes, since each satellite also carries a low noise high frequency fluxgate magnetometer designed by DTU), but also very useful experimental absolute vector data. In this presentation, we will report on the status of the instruments, as well as on the various tests and investigations carried out using these experimental data since launch in November 2013. In particular, we will illustrate the advantages of flying ASM instruments on space-borne magnetic missions for nominal data quality checks, geomagnetic field modeling and science objectives.

  14. Laplace-Runge-Lenz vector in quantum mechanics in noncommutative space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gáliková, Veronika; Kováčik, Samuel; Prešnajder, Peter

    2013-12-15

    The main point of this paper is to examine a “hidden” dynamical symmetry connected with the conservation of the Laplace-Runge-Lenz vector (LRL) in the hydrogen atom problem solved by means of non-commutative quantum mechanics (NCQM). The basic features of NCQM will be introduced to the reader, the key one being the fact that the notion of a point, or a zero distance in the considered configuration space, is abandoned and replaced with a “fuzzy” structure in such a way that the rotational invariance is preserved. The main facts about the conservation of the LRL vector in both classical and quantum theory will be reviewed. Finally, we will search for an analogy in the NCQM, provide our results and their comparison with the QM predictions. The key notions we are going to deal with are non-commutative space, Coulomb-Kepler problem, and symmetry.

  15. Adjustable vector Airy light-sheet single optical tweezers: negative radiation forces on a subwavelength spheroid and spin torque reversal

    NASA Astrophysics Data System (ADS)

    Mitri, Farid G.

    2018-01-01

    Generalized solutions of vector Airy light-sheets, adjustable per their derivative order m, are introduced stemming from the Lorenz gauge condition and Maxwell's equations using the angular spectrum decomposition method. The Cartesian components of the incident radiated electric, magnetic and time-averaged Poynting vector fields in free space (excluding evanescent waves) are determined and computed with particular emphasis on the derivative order of the Airy light-sheet and the polarization on the magnetic vector potential forming the beam. Negative transverse time-averaged Poynting vector components can arise, while the longitudinal counterparts are always positive. Moreover, the analysis is extended to compute the optical radiation force and spin torque vector components on a lossless dielectric prolate subwavelength spheroid in the framework of the electric dipole approximation. The results show that negative forces and spin torques sign reversal arise depending on the derivative order of the beam, the polarization of the magnetic vector potential, and the orientation of the subwavelength prolate spheroid in space. The spin torque sign reversal suggests that counter-clockwise or clockwise rotations around the center of mass of the subwavelength spheroid can occur. The results find useful applications in single Airy light-sheet tweezers, particle manipulation, handling, and rotation applications to name a few examples.

  16. Distance between RBS and AUG plays an important role in overexpression of recombinant proteins.

    PubMed

    Berwal, Sunil K; Sreejith, R K; Pal, Jayanta K

    2010-10-15

    The spacing between ribosome binding site (RBS) and AUG is crucial for efficient overexpression of genes when cloned in prokaryotic expression vectors. We undertook a brief study on the overexpression of genes cloned in Escherichia coli expression vectors, wherein the spacing between the RBS and the start codon was varied. SDS-PAGE and Western blot analysis indicated a high level of protein expression only in constructs where the spacing between RBS and AUG was approximately 40 nucleotides or more, despite the synthesis of the transcripts in the representative cases investigated. Copyright 2010 Elsevier Inc. All rights reserved.

  17. Predicting disulfide connectivity from protein sequence using multiple sequence feature vectors and secondary structure.

    PubMed

    Song, Jiangning; Yuan, Zheng; Tan, Hao; Huber, Thomas; Burrage, Kevin

    2007-12-01

    Disulfide bonds are primary covalent crosslinks between two cysteine residues in proteins that play critical roles in stabilizing the protein structures and are commonly found in extracytoplasmic or secreted proteins. In protein folding prediction, the localization of disulfide bonds can greatly reduce the search in conformational space. Therefore, there is a great need to develop computational methods capable of accurately predicting disulfide connectivity patterns in proteins that could have potentially important applications. We have developed a novel method to predict disulfide connectivity patterns from protein primary sequence, using a support vector regression (SVR) approach based on multiple sequence feature vectors and secondary structure predicted by the PSIPRED program. The results indicate that our method could achieve a prediction accuracy of 74.4% and 77.9%, respectively, when averaged over proteins with two to five disulfide bridges using 4-fold cross-validation, measured at the protein and cysteine-pair levels on a well-defined non-homologous dataset. We assessed the effects of different sequence encoding schemes on the prediction performance of disulfide connectivity. It has been shown that the sequence encoding scheme based on multiple sequence feature vectors coupled with predicted secondary structure can significantly improve the prediction accuracy, thus enabling our method to outperform most other currently available predictors. Our work provides a complementary approach to the current algorithms that should be useful in computationally assigning disulfide connectivity patterns and helps in the annotation of protein sequences generated by large-scale whole-genome projects. The prediction web server and Supplementary Material are accessible at http://foo.maths.uq.edu.au/~huber/disulfide

  18. Uncertainty quantification for complex systems with very high dimensional response using Grassmann manifold variations

    NASA Astrophysics Data System (ADS)

    Giovanis, D. G.; Shields, M. D.

    2018-07-01

    This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
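
    For the pairwise-distance step, a commonly used Grassmann metric is the geodesic distance computed from principal angles; the sketch below is a minimal illustration (the paper's specific metric and projection procedure may differ), with random orthonormal bases standing in for projected full-field solutions.

      import numpy as np

      def grassmann_distance(Y1, Y2):
          """Geodesic distance on the Grassmann manifold between the subspaces
          spanned by the (orthonormal) columns of Y1 and Y2, computed from the
          principal angles obtained via an SVD."""
          s = np.linalg.svd(Y1.T @ Y2, compute_uv=False)
          theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles
          return np.linalg.norm(theta)

      # Random orthonormal bases standing in for projected full-field solutions.
      rng = np.random.default_rng(0)
      A = np.linalg.qr(rng.standard_normal((1000, 5)))[0]
      B = np.linalg.qr(rng.standard_normal((1000, 5)))[0]
      print(grassmann_distance(A, B))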

  19. Development of fluxgate magnetometers and applications to the space science missions

    NASA Astrophysics Data System (ADS)

    Matsuoka, A.; Shinohara, M.; Tanaka, Y.-M.; Fujimoto, A.; Iguchi, K.

    2013-11-01

    The magnetic field is one of the essential physical parameters for studying space physics and the evolution of the solar system. There are several methods for measuring the magnetic field in space with spacecraft and rockets. The fluxgate magnetometer has been the most widely used among them because it measures the vector field accurately and requires only modest weight and power budgets. When we attempt more difficult missions such as multi-satellite observation, landing on a celestial body, and exploration in severe environments, we have to modify the magnetometer or develop new techniques to make the instrument adequate for those projects. For example, we developed a 20-bit delta-sigma analogue-to-digital converter for MGF-I on the BepiColombo MMO satellite to achieve wide-range (±2000 nT) measurements with good resolution in the high-radiation environment. For future missions, we have examined digitizing the circuit, which has great potential to drastically reduce the instrument weight, the power consumption, and the dependence of performance on temperature.

  20. The ability to use remote polarization studies of Earth from space in the national economy (Project of the onboard filter panoramic polarimeter)

    NASA Astrophysics Data System (ADS)

    Nevodovskyi, P. V.; Vidmachenko, A. P.; Geraimchuk, M. D.; Ivahiv, O. V.

    2016-10-01

    Remote polarization studies are a very powerful method for solving astronomical tasks involving the study of the physical properties of the planets and their atmospheres. The method allows objects to be studied both in situ and in vitro, and has many other advantages. To apply this method, many special instruments called polarimeters have already been developed, and their development continues. The essence of the space experiment is to use a polarimeter installed on board a microsatellite to systematically monitor, during each of its revolutions around the Earth, the polarization components of solar radiation diffusely reflected by the atmosphere at different, previously stipulated wavelengths. Many optical schemes are used in devices for measuring and monitoring the parameters of polarized radiation, including in space experiments probing the Earth's atmosphere from satellite orbit. In this paper we analyze the potential for using a filter polarimeter to measure the Stokes vector components.

  1. Wavemill Product Assessment- Defining Products and Evaluating Potential Performance from a Novel Spaceborne Interferometric SAR

    NASA Astrophysics Data System (ADS)

    Cotton, P. D.; Gommenginger, C.; Martin, A.; Marquez, J.; Burbidge, G.; Quilfen, Y.; Chapron, B.; Reppucci, A.; Buck, C.

    2016-08-01

    Ocean Surface Currents are one of the most important ocean properties for oceanographers and operators in the maritime domain. Improved monitoring of ocean currents is systematically the number one requirement that emerges from any science or end user requirement surveys. Wavemill is a novel hybrid interferometric SAR system first proposed by ESA/ESTEC [Buck, 2005]. It offers the possibility of generating two-dimensional wide swath, high resolution, high precision maps of surface current vectors and ocean topography [Buck et al., 2009]. Based on a single spacecraft, it avoids the difficulties of synchronisation and baseline estimation associated with other interferometric SAR systems based on two or more satellites (e.g. the "cartwheel" or "helix" concept). The Wavemill concept has developed steadily since its first inception in 2005. A number of Wavemill studies in recent years have gradually put together facts and figures to support the case for Wavemill as a possible space-borne mission. The Wavemill Product Assessment study (WaPA) aimed to define the scientific capabilities and limitations of a spaceborne Wavemill instrument in preparation for a possible submission of the Wavemill concept as a candidate Earth Explorer Core mission. The WaPA project team brought together expert scientists and engineers in the field of SAR imaging of ocean currents, and included the National Oceanography Centre (UK), Starlab (Spain), IFREMER (France) and Airbus Defence and Space (UK). Overall project management was provided by Satellite Oceanographic Consultants (UK). The approach taken included:
    - A review of SAR imaging of ocean currents in along-track interferometric mode to learn from previous experiments and modelling what key phenomena need to be accounted for to determine the true performance of a spaceborne Wavemill system;
    - Validation of proposed Wavemill primary products based on Wavemill airborne proof-of-concept data and numerical simulations to determine the capabilities and limitations of a spaceborne Wavemill instrument for ocean current vector and sea surface height mapping;
    - An analysis of the potential for ocean wind vector retrieval from a spaceborne Wavemill instrument;
    - An investigation of possible secondary products from Wavemill relating to rivers, ocean/atmosphere interactions, ocean swell and cryospheric applications;
    - An assessment of the synergy between Wavemill and ocean surface current products derived from other remote sensing techniques, accounting for the nature and variability of the measured properties, to identify any additional requirements on a future Wavemill mission.

  2. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    PubMed

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.

  3. Algebraic and radical potential fields. Stability domains in coordinate and parametric space

    NASA Astrophysics Data System (ADS)

    Uteshev, Alexei Yu.

    2018-05-01

    A dynamical system dX/dt = F(X; A) is treated, where F(X; A) is a polynomial (or, more generally, radical-containing) function in the vectors of state variables X ∈ ℝn and parameters A ∈ ℝm. We are looking for stability domains in both spaces, i.e. (a) a domain ℙ ⊂ ℝm such that for any parameter vector specialization A ∈ ℙ, there exists a stable equilibrium for the dynamical system, and (b) a domain 𝕊 ⊂ ℝn such that any point X* ∈ 𝕊 could be made a stable equilibrium by a suitable specialization of the parameter vector A.

  4. Enhanced secure 4-D modulation space optical multi-carrier system based on joint constellation and Stokes vector scrambling.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2018-03-19

    This paper proposes and demonstrates an enhanced secure 4-D modulation optical generalized filter bank multi-carrier (GFBMC) system based on joint constellation and Stokes vector scrambling. The constellation and Stokes vectors are scrambled by using different scrambling parameters. A multi-scroll Chua's circuit map is adopted as the chaotic model. Large secure key space can be obtained due to the multi-scroll attractors and independent operability of subcarriers. A 40.32Gb/s encrypted optical GFBMC signal with 128 parallel subcarriers is successfully demonstrated in the experiment. The results show good resistance against the illegal receiver and indicate a potential way for the future optical multi-carrier system.

  5. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
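
    To make the estimation problem concrete, here is a minimal sketch (not the paper's regularization scheme) of recovering an Euler vector w from site positions and velocities via v = w x r, with an optional Tikhonov term standing in for the kind of regularization discussed; the synthetic network, noise level, and damping value are assumptions.

      import numpy as np

      def skew(r):
          """Skew-symmetric matrix such that skew(r) @ w == np.cross(r, w)."""
          return np.array([[0.0, -r[2], r[1]],
                           [r[2], 0.0, -r[0]],
                           [-r[1], r[0], 0.0]])

      def estimate_euler_vector(positions, velocities, lam=0.0):
          """Estimate the Euler (rotation-rate) vector w from site positions r
          and velocities v using v = w x r, with an optional Tikhonov term
          (lam > 0) to stabilize poorly conditioned network geometries."""
          A = np.vstack([-skew(r) for r in positions])   # v = w x r = -r x w
          b = np.concatenate(velocities)
          return np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)

      # Synthetic check: velocities generated from a known Euler vector.
      rng = np.random.default_rng(1)
      w_true = np.array([1e-9, -2e-9, 3e-9])             # rad/yr, illustrative only
      sites = [6.371e6 * r / np.linalg.norm(r) for r in rng.standard_normal((8, 3))]
      vels = [np.cross(w_true, r) + 1e-4 * rng.standard_normal(3) for r in sites]
      print(estimate_euler_vector(sites, vels, lam=1e-3))   # close to w_true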

  6. Terrorism/Criminology/Sociology via Magnetism-Hamiltonian ``Models''?!: Black Swans; What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe?'' (Charles Middleton, aka ``His Imperial Majesty The Emperor Ming `The Merciless!!!''

    NASA Astrophysics Data System (ADS)

    Carrott, Anthony; Siegel, Edward Carl-Ludwig; Hoover, John-Edgar; Ness, Elliott

    2013-03-01

    Terrorism/Criminology/Sociology: non-Linear applied-mathematician (``nose-to-the-grindstone''/``gearheadism'') ''modelers'': Worden, Short, ...criminologists/counter-terrorists/sociologists confront [SIAM Conf. on Nonlinearity, Seattle (12); Canadian Sociology Conf., Burnaby (12)]. ``The `Sins' of the Fathers Visited Upon the Sons'': Zeno vs Ising vs Heisenberg vs Stoner vs Hubbard vs Siegel ''SODHM'' (But NO Y!!!) vs ...??? Magnetism and in turn are themselves confronted BY MAGNETISM, via relatively magnetism/metal-insulator conductivity/percolation-phase-transitions critical-phenomena-illiterate non-linear applied-mathematician (nose-to-the-grindstone/``gearheadism'') ''modelers''. What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe!!!'' [Charles Middleton, aka ``His Imperial Majesty The Emperor Ming `The Merciless!!!']'' magnetism-Hamiltonian phase-transitions percolation-``models''!: Zeno (~2350 BCE) to Peter the Pilgrim (1150) to Gilbert (1600) to Faraday (1815-1820) to Tate (1870-1880) to Ewing (1882) hysteresis to Barkhausen (1885) to Curie (1895)-Weiss (1895) to Ising-Lenz (r-space/localized-scalar/discrete/1911) to Heisenberg (r-space/localized-vector/discrete/1927) to Preisach (1935) to Stoner (electron/k-space/itinerant-vector/discrete/39) to Stoner-Wohlfarth (technical-magnetism hysteresis/r-space/itinerant-vector/discrete/48) to Hubbard-Longuet-Higgins (k-space versus r-space/

  7. Space Science

    NASA Image and Video Library

    1990-10-01

    Using the Solar Vector Magnetograph, a solar observation facility at NASA's Marshall Space Flight Center (MSFC), scientists from the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama, are monitoring the explosive potential of magnetic areas of the Sun. This effort could someday lead to better prediction of severe space weather, a phenomenon that occurs when blasts of particles and magnetic fields from the Sun impact the magnetosphere, the magnetic bubble around the Earth. When massive solar explosions, known as coronal mass ejections, blast through the Sun's outer atmosphere and plow toward Earth at speeds of thousands of miles per second, the resulting effects can be harmful to communication satellites and astronauts outside the Earth's magnetosphere. Like severe weather on Earth, severe space weather can be costly. On the ground, the magnetic storm wrought by these solar particles can knock out electric power. The researchers from MSFC and NSSTC's solar physics group develop instruments for measuring magnetic fields on the Sun. With these instruments, the group studies the origin, structure, and evolution of the solar magnetic field and the impact it has on Earth's space environment. This photograph shows the Solar Vector Magnetograph and Dr. Mona Hagyard of MSFC, the director of the observatory who leads the development, operation and research program of the Solar Vector Magnetograph.

  8. The organization of conspecific face space in nonhuman primates

    PubMed Central

    Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.

    2013-01-01

    Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823

  9. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  10. Polarization-analyzing circuit on InP for integrated Stokes vector receiver.

    PubMed

    Ghosh, Samir; Kawabata, Yuto; Tanemura, Takuo; Nakano, Yoshiaki

    2017-05-29

    Stokes vector modulation and direct detection (SVM/DD) has immense potential to reduce the cost burden for the next-generation short-reach optical communication networks. In this paper, we propose and demonstrate an InGaAsP/InP waveguide-based polarization-analyzing circuit for an integrated Stokes vector (SV) receiver. By transforming the input state-of-polarization (SOP) and projecting its SV onto three different vectors on the Poincare sphere, we show that the actual SOP can be retrieved by simple calculation. We also reveal that this projection matrix has some flexibility and that its deviation due to device imperfections can be calibrated to a certain degree, so that the proposed device would be fundamentally robust against fabrication errors. A proof-of-concept photonic integrated circuit (PIC) is fabricated on InP by using half-ridge waveguides to successfully demonstrate detection of different SOPs scattered on the Poincare sphere.
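
    A minimal sketch of the kind of linear retrieval the abstract describes: three projections of the Stokes vector are measured and the projection matrix is inverted. The analyzer vectors and the detector model below are assumptions for illustration, not the fabricated device's actual projection matrix.

      import numpy as np

      # Three assumed, linearly independent analyzer vectors on the Poincare
      # sphere (rows of A); the real device's calibrated matrix would replace these.
      A = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

      def recover_sop(powers, total_power, analyzers=A):
          """Assuming each detector measures p_i = 0.5*total_power*(1 + a_i . s),
          invert the projection matrix to recover the normalized Stokes vector s."""
          return np.linalg.solve(analyzers, 2.0 * np.asarray(powers) / total_power - 1.0)

      # Round-trip check with a known, fully polarized state.
      s_true = np.array([0.6, 0.64, 0.48])        # unit-length Stokes 3-vector
      p = 0.5 * 1.0 * (1.0 + A @ s_true)
      print(recover_sop(p, 1.0))                  # recovers s_true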

  11. Applications of Aerodynamic Forces for Spacecraft Orbit Maneuverability in Operationally Responsive Space and Space Reconstitution Needs

    DTIC Science & Technology

    2012-03-01

    [Fragments recovered from a damaged extract.] Nomenclature fragments: re = the radius of the Earth at the equator; Pn = the Legendre polynomial; L = the geocentric latitude. Comment fragments from the accompanying code describe a routine that computes the atmospheric density at an altitude above an oblate Earth given the position vector in the Geocentric Equatorial frame, with the geodetic latitude (GeoDtLat, -Pi/2 to Pi/2, rad), the geocentric latitude (GeoCnLat), and the difference between the geodetic and geocentric latitudes in radians.

  12. Pure state consciousness and its local reduction to neuronal space

    NASA Astrophysics Data System (ADS)

    Duggins, A. J.

    2013-01-01

    The single neuronal state can be represented as a vector in a complex space, spanned by an orthonormal basis of integer spike counts. In this model a scalar element of experience is associated with the instantaneous firing rate of a single sensory neuron over repeated stimulus presentations. Here the model is extended to composite neural systems that are tensor products of single neuronal vector spaces. Depiction of the mental state as a vector on this tensor product space is intended to capture the unity of consciousness. The density operator is introduced as its local reduction to the single neuron level, from which the firing rate can again be derived as the objective correlate of a subjective element. However, the relational structure of perceptual experience only emerges when the non-local mental state is considered. A metric of phenomenal proximity between neuronal elements of experience is proposed, based on the cross-correlation function of neurophysiology, but constrained by the association of theoretical extremes of correlation/anticorrelation in inseparable 2-neuron states with identical and opponent elements respectively.
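
    A toy numerical sketch (dimensions, basis, and state are assumed; spike-count bases would replace the generic basis here) of forming a pure state on a tensor-product space and locally reducing it to one subsystem's density operator by a partial trace:

      import numpy as np

      # Two-subsystem toy with (assumed) local dimension d: build a pure state
      # on the tensor-product space, then obtain the local density operator of
      # subsystem 1 by a partial trace over subsystem 2.
      d = 3
      rng = np.random.default_rng(0)
      psi = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
      psi /= np.linalg.norm(psi)                               # pure state, indexed psi[i, j]
      rho = np.einsum('ij,kl->ijkl', psi, psi.conj())          # |psi><psi| as a (d, d, d, d) tensor
      rho_1 = np.einsum('ijkj->ik', rho)                       # partial trace over subsystem 2
      print(np.trace(rho_1).real, np.linalg.eigvalsh(rho_1))   # trace 1, non-negative spectrum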

  13. Analysis of Human-Spacesuit Interaction

    NASA Technical Reports Server (NTRS)

    Thomas, Neha

    2015-01-01

    Astronauts sustain injuries of various natures such as finger delamination, joint pain, and redness due to their interaction with the space suit. The role of the Anthropometry and Biomechanics Facility is to understand the biomechanics, environmental variables, and ergonomics of the suit. This knowledge is then used to make suggestions for improvement in future iterations of the space suit assembly to prevent injuries while allowing astronauts maneuverability, comfort, and tactility. The projects I was involved in were the Extravehicular Mobility Unit (EMU) space suit stiffness study and the glove feasibility study. The EMU project looked at the forces exerted on the shoulder, arm, and wrist when subjects performed kinematic tasks with and without a pressurized suit. The glove study consisted of testing three conditions - the Series 4000 glove, the Phase VI glove, and the no glove condition. With more than forty channels of sensor data total, it was critical to develop programs that could analyze data with basic descriptive statistics and generate relevant graphs to help understand what happens within the space suit and glove. In my project I created a Graphical User Interface (GUI) in MATLAB that would help me visualize what each sensor was doing within a task. The GUI is capable of displaying overlain plots and can be synchronized with video. This was helpful during the stiffness testing to visualize how the forces on the arm acted while the subject performed tasks such as shoulder adduction/abduction and bicep curls. The main project of focus, however, was the glove comparison study. I wrote MATLAB programs which generated movies of the strain vectors during specific tasks. I also generated graphs that summarized the differences between each glove for the strain, shear and FSR sensors. Preliminary results indicate that the Phase VI glove places less strain and shear on the hand. Future work includes continued data analysis of surveys and sensor data. In the end, the ideal glove is one that provides more tactility for the astronauts but lessens injuries. Often times, a more tactile glove transmits forces better to the hand; thus, achieving a balance of both a tactile and safe glove is the main challenge present.

  14. The NEUF-DIX space project - Non-EquilibriUm Fluctuations during DIffusion in compleX liquids.

    PubMed

    Baaske, Philipp; Bataller, Henri; Braibanti, Marco; Carpineti, Marina; Cerbino, Roberto; Croccolo, Fabrizio; Donev, Aleksandar; Köhler, Werner; Ortiz de Zárate, José M; Vailati, Alberto

    2016-12-01

    Diffusion and thermal diffusion processes in a liquid mixture are accompanied by long-range non-equilibrium fluctuations, whose amplitude is orders of magnitude larger than that of equilibrium fluctuations. The mean-square amplitude of the non-equilibrium fluctuations presents a scale-free power law behavior q^-4 as a function of the wave vector q, but the divergence of the amplitude of the fluctuations at small wave vectors is prevented by the presence of gravity. In microgravity conditions the non-equilibrium fluctuations are fully developed and span all the available length scales up to the macroscopic size of the systems in the direction parallel to the applied gradient. Available theoretical models are based on linearized hydrodynamics and provide an adequate description of the statics and dynamics of the fluctuations in the presence of small temperature/concentration gradients and under stationary or quasi-stationary conditions. We describe a project aimed at the investigation of Non-EquilibriUm Fluctuations during DIffusion in compleX liquids (NEUF-DIX). The focus of the project is on the investigation in micro-gravity conditions of the non-equilibrium fluctuations in complex liquids, trying to tackle several challenging problems that emerged during the latest years, such as the theoretical predictions of Casimir-like forces induced by non-equilibrium fluctuations; the understanding of the non-equilibrium fluctuations in multi-component mixtures including a polymer, both in relation to the transport coefficients and to their behavior close to a glass transition; the understanding of the non-equilibrium fluctuations in concentrated colloidal suspensions, a problem closely related with the detection of Casimir forces; and the investigation of the development of fluctuations during transient diffusion. We envision to parallel these experiments with state-of-the-art multi-scale simulations.

  15. Illustrating dynamical symmetries in classical mechanics: The Laplace-Runge-Lenz vector revisited

    NASA Astrophysics Data System (ADS)

    O'Connell, Ross C.; Jagannathan, Kannan

    2003-03-01

    The inverse square force law admits a conserved vector that lies in the plane of motion. This vector has been associated with the names of Laplace, Runge, and Lenz, among others. Many workers have explored aspects of the symmetry and degeneracy associated with this vector and with analogous dynamical symmetries. We define a conserved dynamical variable α that characterizes the orientation of the orbit in two-dimensional configuration space for the Kepler problem and an analogous variable β for the isotropic harmonic oscillator. This orbit orientation variable is canonically conjugate to the angular momentum component normal to the plane of motion. We explore the canonical one-parameter group of transformations generated by α(β). Because we have an obvious pair of conserved canonically conjugate variables, it is desirable to use them as a coordinate-momentum pair. In terms of these phase space coordinates, the form of the Hamiltonian is nearly trivial because neither member of the pair can occur explicitly in the Hamiltonian. From these considerations we gain a simple picture of dynamics in phase space. The procedure we use is in the spirit of the Hamilton-Jacobi method.
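
    For reference, a standard textbook statement of the conserved vector discussed above (not the authors' derivation), assuming the Kepler Hamiltonian H = p^2/(2m) - k/r with angular momentum L = r x p; the orientation angle written here is one natural choice, since the vector points toward perihelion.

      \[
        \mathbf{A} \;=\; \mathbf{p}\times\mathbf{L} \;-\; m\,k\,\hat{\mathbf{r}},
        \qquad \frac{d\mathbf{A}}{dt} = 0,
        \qquad |\mathbf{A}| = m\,k\,e,
      \]
      \[
        \alpha \;=\; \operatorname{atan2}\!\left(A_y,\, A_x\right)
        \quad\text{(orbit-orientation angle measured from a fixed axis to the perihelion direction).}
      \]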

  16. America's Next Great Ship: Space Launch System Core Stage Transitioning from Design to Manufacturing

    NASA Technical Reports Server (NTRS)

    Birkenstock, Benjamin; Kauer, Roy

    2014-01-01

    The Space Launch System (SLS) Program is essential to achieving the Nation's and NASA's goal of human exploration and scientific investigation of the solar system. As a multi-element program with emphasis on safety, affordability, and sustainability, SLS is becoming America's next great ship of exploration. The SLS Core Stage includes avionics, main propulsion system, pressure vessels, thrust vector control, and structures. Boeing manufactures and assembles the SLS core stage at the Michoud Assembly Facility (MAF) in New Orleans, LA, a historical production center for the Saturn V and Space Shuttle programs. As the transition from design to manufacturing progresses, a well-executed manufacturing, assembly, and operation (MA&O) plan is crucial to meeting performance objectives. Boeing employs classic techniques such as critical path analysis and facility requirements definition as well as innovative approaches such as Constraint Based Scheduling (CBS) and Critical Chain Project Management (CCPM) theory to provide a comprehensive suite of project management tools to manage the health of the baseline plan on both a macro (overall project) and a micro (factory areas) level. These tools coordinate data from multiple business systems and provide a robust network to support Material & Capacity Requirements Planning (MRP/CRP) and priorities. Coupled with these tools and a highly skilled workforce, Boeing is orchestrating the parallel buildup of five major subassemblies throughout the factory. Boeing and NASA are transforming MAF to host state-of-the-art processes, equipment and tooling, the most prominent of which is the Vertical Assembly Center (VAC), the largest weld tool in the world. In concert, a global supply chain is delivering a range of structural elements and component parts necessary to enable an on-time delivery of the integrated Core Stage. SLS is on plan to launch humanity into the next phase of space exploration.

  17. Orion Exploration Flight Test-1 Contingency Drogue Deploy Velocity Trigger

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.; Stochowiak, Susan; Smith, Kelly

    2013-01-01

    As a backup to the GPS-aided Kalman filter and the barometric altimeter, an "adjusted" velocity trigger is used during entry to trigger the chain of events that leads to drogue chute deploy for the Orion Multi-Purpose Crew Vehicle (MPCV) Exploration Flight Test-1 (EFT-1). Even though this scenario is multiple failures deep, the Orion Guidance, Navigation, and Control (GN&C) software makes use of a clever technique that was taken from the Mars Science Laboratory (MSL) program, which recently successfully landed the Curiosity rover on Mars. MSL used this technique to jettison the heat shield at the proper time during descent. Originally, Orion used the un-adjusted navigated velocity, but the removal of the Star Tracker to save costs for EFT-1 increased attitude errors, which increased inertial propagation errors to the point where the un-adjusted velocity caused altitude dispersions at drogue deploy to be too large. Thus, to reduce dispersions, the velocity vector is projected onto a "reference" vector that represents the nominal "truth" vector at the desired point in the trajectory. Because the navigation errors are largely perpendicular to the truth vector, this projection significantly reduces dispersions in the velocity magnitude. This paper will detail the evolution of this trigger method for the Orion project and cover the various methods tested to determine the reference "truth" vector and at what point in the trajectory it should be computed.
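    The projection can be illustrated in a few lines of code. The reference ("truth") vector and the error magnitude below are invented for illustration only and are not Orion flight data.

    import numpy as np

    # Sketch of the adjusted-velocity idea: project the navigated velocity onto
    # the unit reference ("truth") vector.  Numbers are illustrative, not flight data.
    v_ref = np.array([-1200.0, 300.0, -7200.0])      # nominal truth velocity, m/s
    u = v_ref / np.linalg.norm(v_ref)                # unit reference vector

    # Take the navigation error as (mostly) perpendicular to the truth vector.
    e_perp = np.cross(u, [0.0, 0.0, 1.0])
    e_perp = 400.0 * e_perp / np.linalg.norm(e_perp)
    v_nav = v_ref + e_perp                           # navigated (erroneous) velocity

    print("true speed      :", np.linalg.norm(v_ref))
    print("navigated speed :", np.linalg.norm(v_nav))  # inflated by the perpendicular error
    print("adjusted speed  :", np.dot(v_nav, u))       # projection recovers the true magnitude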

  18. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so loses little information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM delivers efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of the EKM-based MKL; (3) this paper adopts Gauss Elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
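    The dot-product-preservation property stated above is easy to demonstrate on synthetic data. The sketch below is not the authors' code: it uses an SVD in place of the Gauss Elimination step to build an orthonormal basis of the subspace actually spanned by the feature vectors, then checks that all pairwise dot products are reproduced by the reduced coordinates.

    import numpy as np

    # 200 synthetic feature vectors in a 50-D space, but of rank 10.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 50))

    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > 1e-10))          # numerical rank (10 here)
    B = Vt[:r]                          # orthonormal basis of the spanned subspace
    Z = X @ B.T                         # reduced r-dimensional coordinates

    # All pairwise dot products agree in the two representations.
    print(np.allclose(X @ X.T, Z @ Z.T))   # True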

  19. The EISCAT_3D Project in Norway: E3DN

    NASA Astrophysics Data System (ADS)

    La Hoz, C.; Oksavik, K.

    2013-12-01

    EISCAT_3D (E3D) is a project to build the next generation of incoherent scatter radars endowed with 3-dimensional scalar and vector capabilities that will replace the current EISCAT radars in Northern Scandinavia. One active (transmitting) site in Norway and four passive (receiving) sites in the Nordic countries will provide 3-D vector imaging capabilities by rapid scanning and multi-beam forming. The unprecedented flexibility of the solid-state transmitter with high duty-cycle, arbitrary wave-forming and polarisation and its pulsed power of 10 MW will provide unrivalled experimental capabilities to investigate the highly non-stationary and non-homogeneous state of the polar upper atmosphere. Aperture Synthesis Imaging Radar (ASIR) will endow E3D with imaging capabilities in three dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. The Norwegian scientific programme is inspired by the pioneer polar scientist Kristian Birkeland and includes pressing questions on polar upper atmospheric research, among others: (Q1) How to proceed beyond the present simplistic, static, stationary and homogeneous analysis of upper atmospheric and ionospheric processes? (Q2) How does space weather affect ionospheric processes and how to support modelling and space weather services? (Q3) How to advance fundamental plasma physics by employing the ionosphere as a natural plasma physics laboratory? (Q4) How does the influx of extraterrestrial material interact with the upper atmosphere and where does the material originate from? (Q5) How does solar activity couple from geospace into the lower atmosphere and climate system, and does this energy change the wave forcing of geospace from below?

  20. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
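    The correction scheme can be sketched with synthetic data standing in for the 30,845 magnetogram measurements; the falloff curve, scatter, and maximum radial distance below are illustrative assumptions, not values from the Letter.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Synthetic stand-in: normalized whole-AR flux versus radial distance rho
    # (in fractions of the solar radius), falling off because of projection error.
    rng = np.random.default_rng(1)
    rho = rng.uniform(0.0, 0.87, 3000)                 # out to ~60 degrees from disk center
    norm_flux = 1.0 - 0.35 * rho**2 + 0.05 * rng.normal(size=rho.size)

    coefs = C.chebfit(rho, norm_flux, deg=4)           # Chebyshev fit of the average curve

    def corrected_flux(measured_flux, rho_obs):
        # Dividing by the fitted curve is equivalent to multiplying by the
        # correction factor that removes the average projection error.
        return measured_flux / C.chebval(rho_obs, coefs)

    # An AR measured at rho = 0.8 with an apparent flux of 1.6e22 Mx:
    print(corrected_flux(1.6e22, 0.8))                 # roughly 2.1e22 Mx after correction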

  1. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
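    A minimal numerical sketch of the LSI idea summarized above: a toy term-document matrix is reduced with a truncated SVD and a query is compared to the documents in the resulting concept space. The matrix, the rank k, and the query are purely illustrative.

    import numpy as np

    A = np.array([[2, 0, 1, 0],       # rows = terms, columns = documents
                  [1, 1, 0, 0],
                  [0, 2, 0, 1],
                  [0, 0, 1, 2]], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    docs_k = (np.diag(s[:k]) @ Vt[:k]).T       # each document as a k-D concept vector
    query = np.array([1, 1, 0, 0], dtype=float)
    query_k = query @ U[:, :k]                 # fold the query into the same space

    # Rank documents by cosine similarity in the reduced space.
    sims = docs_k @ query_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(query_k))
    print(np.argsort(-sims))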

  2. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    NASA Astrophysics Data System (ADS)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
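    The projection-then-classification pipeline described above can be sketched as follows; ordinary PCA (via scikit-learn) stands in for PPCA, a fixed component count stands in for Parallel Analysis, and synthetic data stand in for the UCI heart-disease sets.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, d = 300, 13                                   # 13 attributes, as in Cleveland
    X = rng.normal(size=(n, d))
    X[:, :3] *= 3.0                                  # a few attributes carry most variance
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)  # HP vs NS

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    pca = PCA(n_components=5).fit(X_tr)              # projection vectors from training data
    clf = SVC(kernel="rbf", gamma="scale").fit(pca.transform(X_tr), y_tr)
    print("accuracy:", clf.score(pca.transform(X_te), y_te))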

  3. Malaria vectors in South America: current and future scenarios.

    PubMed

    Laporta, Gabriel Zorello; Linton, Yvonne-Marie; Wilkerson, Richard C; Bergo, Eduardo Sterlino; Nagaki, Sandra Sayuri; Sant'Ana, Denise Cristina; Sallum, Maria Anice Mureb

    2015-08-19

    Malaria remains a significant public health issue in South America. Future climate change may influence the distribution of the disease, which is dependent on the distribution of those Anopheles mosquitoes competent to transmit Plasmodium falciparum. Herein, predictive niche models of the habitat suitability for P. falciparum, the current primary vector Anopheles darlingi and nine other known and/or potential vector species of the Neotropical Albitarsis Complex, were used to document the current situation and project future scenarios under climate changes in South America in 2070. To build each ecological niche model, we employed topography, climate and biome, and the currently defined distribution of P. falciparum, An. darlingi and nine species comprising the Albitarsis Complex in South America. Current and future (i.e., 2070) distributions were forecast by projecting the fitted ecological niche model onto the current environmental situation and two scenarios of simulated climate change. Statistical analyses were performed between the parasite and each vector in both the present and future scenarios to address potential vector roles in the dynamics of malaria transmission. Current distributions of malaria vector species were associated with that of P. falciparum, confirming their role in transmission, especially An. darlingi, An. marajoara and An. deaneorum. Projected climate changes included higher temperatures, lower water availability and biome modifications. Regardless of the future scenario considered, the geographic distribution of P. falciparum expanded in the 2070 projections for South America, with the pathogen covering 35-46% of the continent. As the current primary vector An. darlingi showed low tolerance for drier environments, the projected climate change would significantly reduce suitable habitat, impacting both its distribution and abundance. Conversely, climate generalist members of the Albitarsis Complex showed significant spatial and temporal expansion potential in 2070, and we conclude these species will become more important in the dynamics of malaria transmission in South America. Our data suggest that climate and landscape effects will elevate the importance of members of the Albitarsis Complex in malaria transmission in South America in 2070, highlighting the need for further studies addressing the bionomics, ecology and behaviours of the species comprising the Albitarsis Complex.

  4. Time-based self-spacing techniques using cockpit display of traffic information during approach to landing in a terminal area vectoring environment

    NASA Technical Reports Server (NTRS)

    Williams, D. H.

    1983-01-01

    A simulation study was undertaken to evaluate two time-based self-spacing techniques for in-trail following during terminal area approach. An electronic traffic display was provided in the weather radarscope location. The displayed self-spacing cues allowed the simulated aircraft to follow and to maintain spacing on another aircraft which was being vectored by air traffic control (ATC) for landing in a high-density terminal area. Separation performance data indicate the information provided on the traffic display was adequate for the test subjects to accurately follow the approach path of another aircraft without the assistance of ATC. The time-based technique with a constant-delay spacing criterion produced the most satisfactory spacing performance. Pilot comments indicate the workload associated with the self-separation task was very high and that additional spacing command information and/or aircraft autopilot functions would be desirable for operational implementation of the self-spacing task.

  5. Hypercyclic subspaces for Frechet space operators

    NASA Astrophysics Data System (ADS)

    Petersson, Henrik

    2006-07-01

    A continuous linear operator T is hypercyclic if there is an x such that the orbit {T^n x} is dense, and such a vector x is said to be hypercyclic for T. Recent progress shows that it is possible to characterize Banach space operators that have a hypercyclic subspace, i.e., an infinite-dimensional closed subspace consisting, except for zero, of hypercyclic vectors. The following is known to hold: a Banach space operator T has a hypercyclic subspace if there is a sequence (n_i) and an infinite-dimensional closed subspace E such that T is hereditarily hypercyclic for (n_i) and T^{n_i} -> 0 pointwise on E. In this note we extend this result to the setting of Frechet spaces that admit a continuous norm, and study some applications for important function spaces. As an application we also prove that any infinite dimensional separable Frechet space with a continuous norm admits an operator with a hypercyclic subspace.

  6. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.

  7. Left ventricular hypertrophy index based on a combination of frontal and transverse planes in the ECG and VCG: Diagnostic utility of cardiac vectors

    NASA Astrophysics Data System (ADS)

    Bonomini, Maria Paula; Juan Ingallina, Fernando; Barone, Valeria; Antonucci, Ricardo; Valentinuzzi, Max; Arini, Pedro David

    2016-04-01

    The changes that left ventricular hypertrophy (LVH) induces in depolarization and repolarization vectors are well known. We analyzed the performance of the electrocardiographic and vectorcardiographic transverse planes (TP in the ECG and XZ in the VCG) and frontal planes (FP in the ECG and XY in the VCG) to discriminate LVH patients from control subjects. In an age-balanced set of 58 patients, the directions and amplitudes of QRS-complexes and T-wave vectors were studied. The repolarization vector significantly decreased in modulus from controls to LVH in the transverse plane (TP: 0.45±0.17 mV vs. 0.24±0.13 mV, p<0.0005; XZ: 0.43±0.16 mV vs. 0.26±0.11 mV, p<0.005) while the depolarization vector significantly changed in angle in the electrocardiographic frontal plane (Controls vs. LVH, FP: 48.24±33.66° vs. 46.84±35.44°, p<0.005; XY: 20.28±35.20° vs. 19.35±12.31°, NS). Several LVH indexes were proposed combining such information in both ECG and VCG spaces. A subset of all those indexes with AUC values greater than 0.7 was further studied. This subset comprised four indexes, with three of them belonging to the ECG space. Two out of the four indexes presented the best ROC curves (AUC values: 0.78 and 0.75, respectively). One index belonged to the ECG space and the other one to the VCG space. Both indexes showed a sensitivity of 86% and a specificity of 70%. In conclusion, the proposed indexes can favorably complement LVH diagnosis.

  8. Teaching Vectors Through an Interactive Game Based Laboratory

    NASA Astrophysics Data System (ADS)

    O'Brien, James; Sirokman, Gergely

    2014-03-01

    In recent years, science and particularly physics education has been furthered by the use of project-based interactive learning [1]. There is a tremendous amount of evidence [2] that use of these techniques in a college learning environment leads to a deeper appreciation and understanding of fundamental concepts. Since vectors are the basis for any advancement in physics and engineering courses, the cornerstone of any physics regimen is a concrete and comprehensive introduction to vectors. Here, we introduce a new turn-based vector game that we have developed to help supplement traditional vector learning practices, which allows students to be creative, work together as a team, and accomplish a goal through the understanding of basic vector concepts.

  9. Covariantized vector Galileons

    NASA Astrophysics Data System (ADS)

    Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo

    2016-03-01

    Vector Galileons are ghost-free systems containing higher derivative interactions of vector fields. They break the vector gauge symmetry, and the dynamics of the longitudinal vector polarizations acquire a Galileon symmetry in an appropriate decoupling limit in Minkowski space. Using an Arnowitt-Deser-Misner approach, we carefully reconsider the coupling with gravity of vector Galileons, with the aim of studying the necessary conditions to avoid the propagation of ghosts. We develop arguments that put on a more solid footing the results previously obtained in the literature. Moreover, working in analogy with the scalar counterpart, we find indications for the existence of a "beyond Horndeski" theory involving vector degrees of freedom that avoids the propagation of ghosts thanks to secondary constraints. In addition, we analyze a Higgs mechanism for generating vector Galileons through spontaneous symmetry breaking, and we present its consistent covariantization.

  10. Closedness of orbits in a space with SU(2) Poisson structure

    NASA Astrophysics Data System (ADS)

    Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad

    2014-06-01

    The closedness of orbits of central forces is addressed in a three-dimensional space in which the Poisson bracket among the coordinates is that of the SU(2) Lie algebra. In particular it is shown that among problems with spherically symmetric potential energies, it is only the Kepler problem for which all bounded orbits are closed. In analogy with the case of the ordinary space, a conserved vector (apart from the angular momentum) is explicitly constructed, which is responsible for the orbits being closed. This is the analog of the Laplace-Runge-Lenz vector. The algebra of the constants of the motion is also worked out.

  11. Application of information-retrieval methods to the classification of physical data

    NASA Technical Reports Server (NTRS)

    Mamotko, Z. N.; Khorolskaya, S. K.; Shatrovskiy, L. I.

    1975-01-01

    Scientific data received from satellites are characterized as a multi-dimensional time series, whose terms are vector functions of a vector of measurement conditions. Information retrieval methods are used to construct lower dimensional samples on the basis of the condition vector, in order to obtain these data and to construct partial relations. The methods are applied to the joint Soviet-French Arkad project.

  12. Vector representation of lithium and other mica compositions

    NASA Technical Reports Server (NTRS)

    Burt, Donald M.

    1991-01-01

    In contrast to mathematics, where a vector of one component defines a line, in chemical petrology a one-component system is a point, and two components are needed to define a line, three for a plane, and four for a space. Here, an attempt is made to show how these differences in the definition of a component can be resolved, with lithium micas used as an example. In particular, the condensed composition space theoretically accessible to Li-Fe-Al micas is shown to be an irregular three-dimensional polyhedron, rather than the triangle Al(3+)-Fe(2+)-Li(+), used by some researchers. This result is demonstrated starting with the annite composition and using exchange operators graphically as vectors that generate all of the other mica compositions.

  13. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  14. Dengue and climate change in Australia: predictions for the future should incorporate knowledge from the past.

    PubMed

    Russell, Richard C; Currie, Bart J; Lindsay, Michael D; Mackenzie, John S; Ritchie, Scott A; Whelan, Peter I

    2009-03-02

    Dengue transmission in Australia is currently restricted to Queensland, where the vector mosquito Aedes aegypti is established. Locally acquired infections have been reported only from urban areas in the north-east of the state, where the vector is most abundant. Considerable attention has been drawn to the potential impact of climate change on dengue distribution within Australia, with projections for substantial rises in incidence and distribution associated with increasing temperatures. However, historical data show that much of Australia has previously sustained both the vector mosquito and dengue viruses. Although current vector distribution is restricted to Queensland, the area inhabited by A. aegypti is larger than the disease-transmission areas, and is not restricted by temperature (or vector-control programs); thus, it is unlikely that rising temperatures alone will bring increased vector or virus distribution. Factors likely to be important to dengue and vector distribution in the future include increased dengue activity in Asian and Pacific nations that would raise rates of virus importation by travellers, importation of vectors via international ports to regions without A. aegypti, higher rates of domestic collection and storage of water that would provide habitat in urban areas, and growing human populations in northern Australia. Past and recent successful control initiatives in Australia lend support to the idea that well resourced and functioning surveillance programs, and effective public health intervention capabilities, are essential to counter threats from dengue and other mosquito-borne diseases. Models projecting future activity of dengue (or other vector-borne disease) with climate change should carefully consider the local historical and contemporary data on the ecology and distribution of the vector and local virus transmission.

  15. Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging

    DTIC Science & Technology

    2016-12-15

    [No abstract available; the record text consists of truncated citation fragments, including S. Klein, "HF Vector Sensor for Radio Astronomy: Ground Testing Results" (AIAA SPACE 2016); a 2016 IEEE Aerospace Conference paper (doi: 10.1109/AERO.2016.7500688); a work by K.-C. Ho, K.-C. Tan, and B. T. G. Tan; and a paper on statistical imaging in radio astronomy via an expectation-maximization algorithm for structured covariance estimation.]

  16. Lie theory and control systems defined on spheres

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.

    1972-01-01

    It is shown that in constructing a theory for the most elementary class of control problems defined on spheres, some results from the Lie theory play a natural role. To understand controllability, optimal control, and certain properties of stochastic equations, Lie theoretic ideas are needed. The framework considered here is the most natural departure from the usual linear system/vector space problems which have dominated control systems literature. For this reason results are compared with those previously available for the finite dimensional vector space case.

  17. Space Object Classification Using Fused Features of Time Series Data

    NASA Astrophysics Data System (ADS)

    Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.

    In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.

  18. Secure coherent optical multi-carrier system with four-dimensional modulation space and Stokes vector scrambling.

    PubMed

    Zhang, Lijia; Liu, Bo; Xin, Xiangjun

    2015-06-15

    A secure enhanced coherent optical multi-carrier system based on Stokes vector scrambling is proposed and experimentally demonstrated. The optical signal with four-dimensional (4D) modulation space has been scrambled intra- and inter-subcarriers, where a multi-layer logistic map is adopted as the chaotic model. An experiment with 61.71-Gb/s encrypted multi-carrier signal is successfully demonstrated with the proposed method. The results indicate a promising solution for the physical secure optical communication.

  19. Using trees to compute approximate solutions to ordinary differential equations exactly

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.

  20. Local Gram-Schmidt and covariant Lyapunov vectors and exponents for three harmonic oscillator problems

    NASA Astrophysics Data System (ADS)

    Hoover, Wm. G.; Hoover, Carol G.

    2012-02-01

    We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.

  1. Sensitivity analysis of the space shuttle to ascent wind profiles

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Austin, L. D., Jr.

    1982-01-01

    A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.

  2. Finite Geometries in Quantum Theory:. from Galois (fields) to Hjelmslev (rings)

    NASA Astrophysics Data System (ADS)

    Saniga, Metod; Planat, Michel

    Geometries over Galois fields (and related finite combinatorial structures/algebras) have recently been recognized to play an ever-increasing role in quantum theory, especially when addressing properties of mutually unbiased bases (MUBs). The purpose of this contribution is to show that completely new vistas open up if we consider a generalized class of finite (projective) geometries, viz. those defined over Galois rings and/or other finite Hjelmslev rings. The case is illustrated by demonstrating that the basic combinatorial properties of a complete set of MUBs of a q-dimensional Hilbert space H_q, q = p^r with p being a prime and r a positive integer, are qualitatively mimicked by the configuration of points lying on a proper conic in a projective Hjelmslev plane defined over a Galois ring of characteristic p^2 and rank r. The q vectors of a basis of H_q correspond to the q points of a (so-called) neighbour class and the q + 1 MUBs answer to the total number of (pairwise disjoint) neighbour classes on the conic. Although this remarkable analogy is still established at the level of cardinalities only, we currently work on constructing an explicit mapping by associating a MUB to each neighbour class of the points of the conic and a state vector of this MUB to a particular point of the class. Further research in this direction may prove to be of great relevance for many areas of quantum information theory, in particular for quantum information processing.

  3. Modelling the effects of past and future climate on the risk of bluetongue emergence in Europe

    PubMed Central

    Guis, Helene; Caminade, Cyril; Calvete, Carlos; Morse, Andrew P.; Tran, Annelise; Baylis, Matthew

    2012-01-01

    Vector-borne diseases are among those most sensitive to climate because the ecology of vectors and the development rate of pathogens within them are highly dependent on environmental conditions. Bluetongue (BT), a recently emerged arboviral disease of ruminants in Europe, is often cited as an illustration of climate's impact on disease emergence, although no study has yet tested this association. Here, we develop a framework to quantitatively evaluate the effects of climate on BT's emergence in Europe by integrating high-resolution climate observations and model simulations within a mechanistic model of BT transmission risk. We demonstrate that a climate-driven model explains, in both space and time, many aspects of BT's recent emergence and spread, including the 2006 BT outbreak in northwest Europe which occurred in the year of highest projected risk since at least 1960. Furthermore, the model provides mechanistic insight into BT's emergence, suggesting that the drivers of emergence across Europe differ between the South and the North. Driven by simulated future climate from an ensemble of 11 regional climate models, the model projects increase in the future risk of BT emergence across most of Europe with uncertainty in rate but not in trend. The framework described here is adaptable and applicable to other diseases, where the link between climate and disease transmission risk can be quantified, permitting the evaluation of scale and uncertainty in climate change's impact on the future of such diseases. PMID:21697167

  4. Vector Data Model: A New Model of HDF-EOS to Support GIS Applications in EOS

    NASA Astrophysics Data System (ADS)

    Chi, E.; Edmonds, R. D.

    2001-05-01

    NASA's Earth Science Data Information System (ESDIS) project has an active program of research and development of systems for the storage and management of Earth science data for the Earth Observation System (EOS) mission, a key program of the NASA Earth Science Enterprise. EOS has adopted an extension of the Hierarchical Data Format (HDF) as the format of choice for standard product distribution. Three new EOS-specific datatypes - point, swath and grid - have been defined within the HDF framework. The enhanced data format is named HDF-EOS. Geographic Information Systems (GIS) are used by Earth scientists in EOS data product generation, visualization, and analysis. There are two major data types in GIS applications, raster and vector. The current HDF-EOS handles only the raster type in the swath data model. The vector data model is identified and developed as a new HDF-EOS format to meet the requirements of scientists working with EOS data products in vector format. The vector model is designed using a topological data structure, which defines the spatial relationships among points, lines, and polygons. The three major topological concepts that the vector model adopts are: a) lines connect to each other at nodes (connectivity), b) lines that connect to surround an area define a polygon (area definition), and c) lines have direction and left and right sides (contiguity). The vector model is implemented in HDF by mapping the conceptual model to HDF internal data models and structures, viz. Vdata, Vgroup, and their associated attribute structures. The point, line, and polygon geometry and attribute data are stored in similar tables. Further, the vector model utilizes the structure and product metadata, which characterize the HDF-EOS. Both types of metadata are encoded in text format by using Object Description Language (ODL) and stored as global attributes in HDF-EOS files. EOS has developed a series of routines for storing, retrieving, and manipulating vector data in the categories of access, definition, basic I/O, inquiry, and subsetting. The routines are tested and form a package, HDF-EOS/Vector. The alpha version of HDF-EOS/Vector has been distributed through the HDF-EOS project web site at http://hdfeos.gsfc.nasa.gov. We are also developing translators between the HDF-EOS vector format and a variety of GIS formats, such as Shapefile. The HDF-EOS vector model enables EOS scientists to deliver EOS data in a way ready for Earth scientists to analyze using GIS software, and also provides the EOS project a mechanism to store GIS data products in a meaningful vector format with significant economy in storage.

  5. Eco-bio-social research on community-based approaches for Chagas disease vector control in Latin America.

    PubMed

    Gürtler, Ricardo E; Yadon, Zaida E

    2015-02-01

    This article provides an overview of three research projects which designed and implemented innovative interventions for Chagas disease vector control in Bolivia, Guatemala and Mexico. The research initiative was based on sound principles of community-based ecosystem management (ecohealth), integrated vector management, and interdisciplinary analysis. The initial situational analysis achieved a better understanding of ecological, biological and social determinants of domestic infestation. The key factors identified included: housing quality; type of peridomestic habitats; presence and abundance of domestic dogs, chickens and synanthropic rodents; proximity to public lights; location in the periphery of the village. In Bolivia, plastering of mud walls with appropriate local materials and regular cleaning of beds and of clothes next to the walls, substantially decreased domestic infestation and abundance of the insect vector Triatoma infestans. The Guatemalan project revealed close links between house infestation by rodents and Triatoma dimidiata, and vector infection with Trypanosoma cruzi. A novel community-operated rodent control program significantly reduced rodent infestation and bug infection. In Mexico, large-scale implementation of window screens translated into promising reductions in domestic infestation. A multi-pronged approach including community mobilisation and empowerment, intersectoral cooperation and adhesion to integrated vector management principles may be the key to sustainable vector and disease control in the affected regions. © World Health Organization 2015. The World Health Organization has granted Oxford University Press permission for the reproduction of this article.

  6. Next Generation NASA Initiative for Space Geodesy

    NASA Technical Reports Server (NTRS)

    Merkowitz, S. M.; Desai, S.; Gross, R. S.; Hilliard, L.; Lemoine, F. G.; Long, J. L.; Ma, C.; McGarry J. F.; Murphy, D.; Noll, C. E.; hide

    2012-01-01

    Space geodesy measurement requirements have become more and more stringent as our understanding of the physical processes and our modeling techniques have improved. In addition, current and future spacecraft will have ever-increasing measurement capability and will lead to increasingly sophisticated models of changes in the Earth system. Ground-based space geodesy networks with enhanced measurement capability will be essential to meeting these oncoming requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. These requirements have been articulated by the Global Geodetic Observing System (GGOS). The NASA Space Geodesy Project (SGP) is developing a prototype core site as the basis for a next generation Space Geodetic Network (SGN) that would be NASA's contribution to a global network designed to produce the higher quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth Observing spacecraft. Each of the sites in the SGN would include co-located, state-of-the-art systems from all four space geodetic observing techniques (GNSS, SLR, VLBI, and DORIS). The prototype core site is being developed at NASA's Geophysical and Astronomical Observatory at Goddard Space Flight Center. The project commenced in 2011 and is scheduled for completion in late 2013. In January 2012, two multiconstellation GNSS receivers, GODS and GODN, were established at the prototype site as part of the local geodetic network. Development and testing are also underway on the next generation SLR and VLBI systems along with a modern DORIS station. An automated survey system is being developed to measure inter-technique vector ties, and network design studies are being performed to define the appropriate number and distribution of these next generation space geodetic core sites that are required to achieve the driving ITRF requirements. We present the status of this prototype next generation space geodetic core site, results from the analysis of data from the established geodetic stations, and results from the ongoing network design studies.

  7. Dynamics and Synchronization of Nonlinear Oscillators with Time Delays: A Study with Fiber Lasers

    DTIC Science & Technology

    2007-07-19

    [No abstract available; the record text consists of extraction fragments: an abbreviation list (PC, polarization controller; PD, photodetector; VA, variable attenuator; WDM, wavelength division multiplexer), introductory text noting that injection locking is a common practice used to lock the frequency and phase of a laser to an injected signal, and a truncated description of a principal-component-style decomposition that finds a basis vector maximizing the mean squared projection of the data, with succeeding basis vectors found that maximize the projection ...]

  8. Recent Developments In Theory Of Balanced Linear Systems

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek

    1994-01-01

    Report presents theoretical study of some issues of controllability and observability of a system represented by a linear, time-invariant mathematical model of the form dx/dt = Ax + Bu, y = Cx + Du, x(0) = x0, where x is the n-dimensional vector representing the state of the system; u is the p-dimensional vector representing the control input to the system; y is the q-dimensional vector representing the output of the system; n, p, and q are integers; x(0) is the initial (zero-time) state vector; and the set of matrices (A, B, C, D) is said to constitute a state-space representation of the system.
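    For illustration, the quoted model can be simulated directly; the matrices below describe an arbitrary damped oscillator, not any system from the report, and a simple forward-Euler step is used.

    import numpy as np

    # dx/dt = Ax + Bu,  y = Cx + Du,  x(0) = x0, integrated with forward Euler.
    A = np.array([[0.0, 1.0],
                  [-4.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    dt = 0.001
    x = np.array([[1.0], [0.0]])                 # initial state x0
    u = np.array([[0.0]])                        # zero control input
    for _ in range(5000):                        # 5 s of simulated time
        x = x + dt * (A @ x + B @ u)
    y = C @ x + D @ u
    print("output y after 5 s:", y.ravel()[0])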

  9. Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing.

    PubMed

    Yilmaz, Ozgur

    2015-12-01

    This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
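    The Boolean combination of binary vectors mentioned above can be sketched in the style of classic hyperdimensional computing; the roles, fillers, and dimensionality below are illustrative choices and this is not the letter's cellular-automaton reservoir.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10000
    def hv():          return rng.integers(0, 2, D, dtype=np.int8)   # random hypervector
    def bind(a, b):    return a ^ b                                  # XOR binding
    def bundle(*vs):   return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.int8)  # majority
    def hamming(a, b): return np.count_nonzero(a != b)

    MEDIUM, VEHICLE, SPEED = hv(), hv(), hv()                        # roles
    LAND, AIR, CAR, PLANE, SLOW, FAST = (hv() for _ in range(6))     # fillers

    land_rec = bundle(bind(MEDIUM, LAND), bind(VEHICLE, CAR),   bind(SPEED, SLOW))
    air_rec  = bundle(bind(MEDIUM, AIR),  bind(VEHICLE, PLANE), bind(SPEED, FAST))

    # "What is the car of air?"  Map CAR through the land record into the air
    # record and clean up against the codebook by Hamming distance.
    query = bind(bind(CAR, land_rec), air_rec)
    codebook = {"LAND": LAND, "AIR": AIR, "CAR": CAR,
                "PLANE": PLANE, "SLOW": SLOW, "FAST": FAST}
    print(min(codebook, key=lambda k: hamming(query, codebook[k])))  # -> PLANE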

  10. Lagrangian Descriptors: A Method for Revealing Phase Space Structures of General Time Dependent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mancho, Ana M.; Wiggins, Stephen; Curbelo, Jezabel; Mendoza, Carolina

    2013-11-01

    Lagrangian descriptors are a recent technique which reveals geometrical structures in phase space and which are valid for aperiodically time dependent dynamical systems. We discuss a general methodology for constructing them and we discuss a "heuristic argument" that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors with both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We thank CESGA for computing facilities. This research was supported by MINECO grants: MTM2011-26696, I-Math C3-0104, ICMAT Severo Ochoa project SEV-2011-0087, and CSIC grant OCEANTECH. SW acknowledges the support of the ONR (Grant No. N00014-01-1-0769).

  11. DataWarrior: an open-source program for chemistry aware data visualization and analysis.

    PubMed

    Sander, Thomas; Freyss, Joel; von Korff, Modest; Rufener, Christian

    2015-02-23

    Drug discovery projects in the pharmaceutical industry accumulate thousands of chemical structures and ten-thousands of data points from a dozen or more biological and pharmacological assays. A sufficient interpretation of the data requires understanding, which molecular families are present, which structural motifs correlate with measured properties, and which tiny structural changes cause large property changes. Data visualization and analysis software with sufficient chemical intelligence to support chemists in this task is rare. In an attempt to contribute to filling the gap, we released our in-house developed chemistry aware data analysis program DataWarrior for free public use. This paper gives an overview of DataWarrior's functionality and architecture. Exemplarily, a new unsupervised, 2-dimensional scaling algorithm is presented, which employs vector-based or nonvector-based descriptors to visualize the chemical or pharmacophore space of even large data sets. DataWarrior uses this method to interactively explore chemical space, activity landscapes, and activity cliffs.

  12. Simulation of Combustion Systems with Realistic g-Jitter

    NASA Technical Reports Server (NTRS)

    Mell, W. E.; McGrattan, K. B.; Nakamura, Y.; Baum, H. R.

    2001-01-01

    A number of facilities are available for microgravity combustion experiments: aircraft, drop towers, sounding rockets, the space shuttle, and, in the future, the International Space Station (ISS). Acceleration disturbances or g-jitter about the background level of reduced gravity exist in all these microgravity facilities. While g-jitter is routinely measured, a quantitative comparison of the quality of g-jitter among the different microgravity facilities, in terms of its effects on combustion experiments, has not been compiled. Low frequency g-jitter (< 1 Hz) has been repeatedly observed to disturb a number of combustion systems. Guidelines regarding tolerable levels of acceleration disturbances for combustion experiments have been developed for use in the design of ISS experiments. The validity of these guidelines, however, remains unknown. In this project a transient, 3-D numerical model is under development to simulate the effects of realistic g-jitter on a number of combustion systems. The measured acceleration vector or some representation of it can be used as input to the simulation.

  13. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
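    One way to picture searching for binding sites in a vector space is sketched below. The one-hot encoding and the query construction (positive-centroid minus negative-centroid, one plausible reading of a "negative-to-positive vector") are assumptions made for illustration, not the paper's exact NPV or ODV definitions.

    import numpy as np

    BASES = "ACGT"
    def encode(site):                          # k-mer -> 4k one-hot vector
        v = np.zeros(4 * len(site))
        for i, b in enumerate(site):
            v[4 * i + BASES.index(b)] = 1.0
        return v

    positives = ["TGACTCA", "TGAGTCA", "TGACTCA", "TTACTCA"]   # toy known sites
    negatives = ["ACGTACG", "GGGGCCC", "ATATATA", "CCAGTTG"]   # toy background k-mers

    query = np.mean([encode(s) for s in positives], axis=0) \
          - np.mean([encode(s) for s in negatives], axis=0)

    sequence = "GGCGTTGACTCATTTACGCA"                          # unannotated sequence to scan
    k = len(positives[0])
    scores = [(float(encode(sequence[i:i + k]) @ query), i)
              for i in range(len(sequence) - k + 1)]
    best_score, best_pos = max(scores)
    print(best_pos, sequence[best_pos:best_pos + k])           # finds the embedded TGACTCA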

  14. Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui

    2017-10-01

    In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by actuators in control. On the other hand, for unmatched uncertainties whose projection onto the unmatched space is nonzero, additive functions are designed, based on a (vector) relative degree condition, to compensate for the uncertainties from the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.

  15. An ensemble of SVM classifiers based on gene pairs.

    PubMed

    Tong, Muchenxuan; Liu, Kun-Hong; Xu, Chungui; Ju, Wenbin

    2013-07-01

    In this paper, a genetic algorithm (GA) based ensemble support vector machine (SVM) classifier built on gene pairs (GA-ESP) is proposed. The SVMs (base classifiers of the ensemble system) are trained on different informative gene pairs. These gene pairs are selected by the top scoring pair (TSP) criterion. Each of these pairs projects the original microarray expression onto a 2-D space. Extensive permutation of gene pairs may reveal more useful information and potentially lead to an ensemble classifier with satisfactory accuracy and interpretability. GA is further applied to select an optimized combination of base classifiers. The effectiveness of the GA-ESP classifier is evaluated on both binary-class and multi-class datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Self-organizing neural networks--an alternative way of cluster analysis in clinical chemistry.

    PubMed

    Reibnegger, G; Wachter, H

    1996-04-15

    Supervised learning schemes have been employed by several workers for training neural networks designed to solve clinical problems. We demonstrate that unsupervised techniques can also produce interesting and meaningful results. Using a data set on the chemical composition of milk from 22 different mammals, we demonstrate that self-organizing feature maps (Kohonen networks) as well as a modified version of error backpropagation technique yield results mimicking conventional cluster analysis. Both techniques are able to project a potentially multi-dimensional input vector onto a two-dimensional space whereby neighborhood relationships remain conserved. Thus, these techniques can be used for reducing dimensionality of complicated data sets and for enhancing comprehensibility of features hidden in the data matrix.
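    A minimal self-organizing-map sketch of the projection described above: multi-dimensional inputs are mapped onto a two-dimensional grid while approximately preserving neighborhood relationships. The random data stand in for the milk-composition measurements used in the paper, and the grid size and learning schedules are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 5))                      # 200 samples, 5 features
    grid = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
    W = rng.normal(size=(64, 5))                          # one weight vector per grid node

    for t in range(2000):
        lr    = 0.5 * np.exp(-t / 1000)                   # decaying learning rate
        sigma = 3.0 * np.exp(-t / 1000)                   # decaying neighborhood radius
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))    # best-matching unit
        h = np.exp(-np.sum((grid - grid[bmu])**2, axis=1) / (2 * sigma**2))
        W += lr * h[:, None] * (x - W)                    # pull neighbors toward x

    # Each input is now represented by the 2-D grid coordinate of its best-matching unit.
    print(grid[np.argmin(np.linalg.norm(W - data[0], axis=1))])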

  17. A thick-walled sphere rotating in a uniform magnetic field: The next step to de-spin a space object

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.; Youngquist, Robert C.; Caracciolo, Ryan A.; Peck, Mason; Leve, Frederick A.

    2017-08-01

    Modeling the interaction between a moving conductor and a static magnetic field is critical to understanding the operation of induction motors, eddy current braking, and the dynamics of satellites moving through Earth's magnetic field. Here, we develop the case of a thick-walled sphere rotating in a uniform magnetic field, which is the simplest, non-trivial, magneto-statics problem that leads to complete closed-form expressions for the resulting potentials, fields, and currents. This solution requires knowledge of all of Maxwell's time independent equations, scalar and vector potential equations, and the Lorentz force law. The paper presents four cases and their associated experimental results, making this topic appropriate for an advanced student lab project.

  18. SVPWM Technique with Varying DC-Link Voltage for Common Mode Voltage Reduction in a Matrix Converter and Analytical Estimation of its Output Voltage Distortion

    NASA Astrophysics Data System (ADS)

    Padhee, Varsha

    Common Mode Voltage (CMV) in any power converter has been the major contributor to premature motor failures, bearing deterioration, shaft voltage build-up and electromagnetic interference. Intelligent control methods like Space Vector Pulse Width Modulation (SVPWM) techniques provide immense potential and flexibility to reduce CMV, thereby targeting all the aforementioned problems. Other solutions like passive filters, shielded cables and EMI filters add to the volume and cost metrics of the entire system. Smart SVPWM techniques, therefore, come with a very important advantage of being an economical solution. This thesis discusses a modified space vector technique applied to an Indirect Matrix Converter (IMC) which results in the reduction of common mode voltages and other advanced features. The conventional indirect space vector pulse-width modulation (SVPWM) method of controlling matrix converters involves the usage of two adjacent active vectors and one zero vector for both rectifying and inverting stages of the converter. By suitable selection of space vectors, the rectifying stage of the matrix converter can generate different levels of virtual DC-link voltage. This capability can be exploited for operation of the converter in different ranges of modulation indices for varying machine speeds. This results in lower common mode voltage and improves the harmonic spectrum of the output voltage, without increasing the number of switching transitions as compared to conventional modulation. To summarize, it can be said that the responsibility of formulating output voltages with a particular magnitude and frequency has been transferred solely to the rectifying stage of the IMC. Estimation of the degree of distortion in the three-phase output voltage is another facet discussed in this thesis. An understanding of the SVPWM technique and the switching sequence of the space vectors in detail gives the potential to estimate the RMS value of the switched output voltage of any converter. This conceivably aids the sizing and design of output passive filters. An analytical estimation method has been presented to achieve this purpose for an IMC. Knowledge of the fundamental component in the output voltage can be utilized to calculate its Total Harmonic Distortion (THD). The effectiveness of the proposed SVPWM algorithms and the analytical estimation technique is substantiated by simulations in MATLAB/Simulink and experiments on a laboratory prototype of the IMC. Proper comparison plots have been provided to contrast the performance of the proposed methods with the conventional SVPWM method. The behavior of output voltage distortion and CMV with variation in operating parameters like modulation index and output frequency has also been analyzed.

  19. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    PubMed

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
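
    The sketch below gives one simplified reading of the projection step, assuming a per-structure reference color vector is already known: each pixel's color is projected onto the unit reference vector, producing a score map for that structure. The structure names and reference values are hypothetical placeholders; the authors' full pipeline (decision tree-based SVM classification, centroid comparison, vector alignment) is not reproduced here.

    ```python
    import numpy as np

    def structure_scores(image_rgb, reference_vectors):
        """Project each pixel's color onto per-structure reference vectors.

        image_rgb         : (H, W, 3) float array of pixel colors
        reference_vectors : dict mapping structure name -> length-3 color vector
        Returns a dict of (H, W) score maps, one per structure.
        """
        pixels = image_rgb.reshape(-1, 3).astype(float)
        scores = {}
        for name, ref in reference_vectors.items():
            unit = np.asarray(ref, dtype=float)
            unit = unit / np.linalg.norm(unit)          # normalize the reference vector
            scores[name] = (pixels @ unit).reshape(image_rgb.shape[:2])
        return scores

    # Hypothetical reference vectors for two structures, applied to a random image.
    refs = {"nuclei": [0.30, 0.20, 0.55], "stroma": [0.80, 0.55, 0.65]}
    demo = np.random.rand(4, 4, 3)
    print({name: m.shape for name, m in structure_scores(demo, refs).items()})
    ```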

  20. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the iterations progress, starting from the coefficients corresponding to the highest-ranked basis vectors and extending to those of the least informative basis vectors. We found this gradual increment in the sampling window to be more stable compared to resampling all the coefficients right from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
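
    A minimal sketch of the parameterization step is given below, assuming each training image is flattened into one row of a snapshot matrix: the POD basis comes from the singular value decomposition of the centered snapshots, and any model can be projected to, and reconstructed from, a small number of coefficients. The array sizes and mode count are illustrative assumptions; the McMC sampler and the growing sampling window are not shown.

    ```python
    import numpy as np

    def pod_basis(training_images, n_modes):
        """Truncated POD basis from Monte Carlo training images (one per row)."""
        mean = training_images.mean(axis=0)
        centered = training_images - mean
        # Right singular vectors of the snapshot matrix are the POD modes.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:n_modes].T, mean                     # (n_cells, n_modes), (n_cells,)

    def project(model, basis, mean):
        """POD coefficients of a model (e.g., the McMC starting model)."""
        return basis.T @ (model - mean)

    def reconstruct(coeffs, basis, mean):
        """Map POD coefficients back to the full parameter field."""
        return mean + basis @ coeffs

    # Toy sizes: 200 training images on a 30 x 30 grid, 20 retained modes.
    tis = np.random.rand(200, 900)
    basis, mean = pod_basis(tis, n_modes=20)
    coeffs = project(tis[0], basis, mean)
    print(coeffs.shape, float(np.linalg.norm(tis[0] - reconstruct(coeffs, basis, mean))))
    ```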

  1. Climate change effects on Chikungunya transmission in Europe: geospatial analysis of vector's climatic suitability and virus' temperature requirements.

    PubMed

    Fischer, Dominik; Thomas, Stephanie M; Suk, Jonathan E; Sudre, Bertrand; Hess, Andrea; Tjaden, Nils B; Beierkuhnlein, Carl; Semenza, Jan C

    2013-11-12

    Chikungunya was, from the European perspective, considered to be a travel-related tropical mosquito-borne disease prior to the first European outbreak in Northern Italy in 2007. This was followed by cases of autochthonous transmission reported in South-eastern France in 2010. Both events occurred after the introduction, establishment and expansion of the Chikungunya-competent and highly invasive disease vector Aedes albopictus (Asian tiger mosquito) in Europe. In order to assess whether these outbreaks are indicative of the beginning of a trend or one-off events, there is a need to further examine the factors driving the potential transmission of Chikungunya in Europe. The climatic suitability, both now and in the future, is an essential starting point for such an analysis. The climatic suitability for Chikungunya outbreaks was determined by using bioclimatic factors that influence both vector and pathogen. Climatic suitability for the European distribution of the vector Aedes albopictus was based upon previous correlative environmental niche models. Climatic risk classes were derived by combining climatic suitability for the vector with known temperature requirements for pathogen transmission, obtained from outbreak regions. In addition, the longest potential intra-annual season for Chikungunya transmission was estimated for regions with expected vector occurrences. In order to analyse spatio-temporal trends for risk exposure and season of transmission in Europe, climate change impacts are projected for three time-frames (2011-2040, 2041-2070 and 2071-2100) and two climate scenarios (A1B and B1) from the Intergovernmental Panel on Climate Change (IPCC). These climatic projections are based on the regional climate model COSMO-CLM, which builds on the global model ECHAM5. European areas with current and future climatic suitability for Chikungunya transmission are identified. An increase in risk is projected for Western Europe (e.g. France and the Benelux States) in the first half of the 21st century and from mid-century onwards for central parts of Europe (e.g. Germany). Interestingly, the southernmost parts of Europe do not generally provide suitable conditions in these projections. Nevertheless, many Mediterranean regions will remain climatically suitable for transmission. Overall, the highest risk of transmission by the end of the 21st century was projected for France, Northern Italy and the Pannonian Basin (East-Central Europe). This general tendency is depicted in both the A1B and B1 climate change scenarios. In order to guide preparedness for further outbreaks, it is crucial to anticipate risk so as to identify areas where specific public health measures, such as surveillance and vector control, can be implemented. However, public health practitioners need to be aware that climate is only one factor driving the transmission of vector-borne disease.

  2. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical, minimizing the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
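
    As a small illustration of the quaternion support described above, the sketch below composes two frame rotations with the Hamilton product. It is a generic Python example, not the package's Ada API; the scalar-first component ordering and multiplication order are assumptions, and actual flight-software conventions vary.

    ```python
    import math

    def quat_mul(p, q):
        """Hamilton product of two quaternions given as (w, x, y, z) tuples.

        Composing two frame rotations corresponds to multiplying their quaternions.
        """
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return (pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw)

    # A 90-degree rotation about z composed with itself gives a 180-degree rotation:
    qz90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
    print(quat_mul(qz90, qz90))   # approximately (0, 0, 0, 1)
    ```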

  3. A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge.

    PubMed

    Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E

    2017-01-01

    The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases up to: 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.

  4. A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge*. Spanish-English Cross-language Case Study.

    PubMed

    Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E

    2017-10-26

    The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases up to: 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.

  5. Implementation of the Orbital Maneuvering Systems Engine and Thrust Vector Control for the European Service Module

    NASA Technical Reports Server (NTRS)

    Millard, Jon

    2014-01-01

    The European Space Agency (ESA) has entered into a partnership with the National Aeronautics and Space Administration (NASA) to develop and provide the Service Module (SM) for the Orion Multipurpose Crew Vehicle (MPCV) Program. The European Service Module (ESM) will provide main engine thrust by utilizing the Space Shuttle Program Orbital Maneuvering System Engine (OMS-E). Thrust Vector Control (TVC) of the OMS-E will be provided by the Orbital Maneuvering System (OMS) TVC, also used during the Space Shuttle Program. NASA will be providing the OMS-E and OMS TVC to ESA as Government Furnished Equipment (GFE) to integrate into the ESM. This presentation will describe the OMS-E and OMS TVC and discuss the implementation of the hardware for the ESM.

  6. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated frequencies or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single-shaker test methods. By employing band-selectable analysis (zoom) techniques and by employing Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve fit procedure, the usefulness of the single-shaker approach can be extended.

  7. A novel weld seam detection method for space weld seam of narrow butt joint in laser welding

    NASA Astrophysics Data System (ADS)

    Shao, Wen Jun; Huang, Yu; Zhang, Yong

    2018-02-01

    Structured light measurement is widely used for weld seam detection owing to its high measurement precision and robustness. However, there is nearly no geometrical deformation of a stripe projected onto the weld face when the seam width is less than 0.1 mm and there is no misalignment, so it is very difficult to ensure an exact retrieval of the seam feature. This issue has become prominent as laser welding of butt joints in thin metal plates is widely applied. Moreover, measuring the seam width, the seam center and the normal vector of the weld face at the same time during the welding process is of great importance to the welding quality but rarely reported. Consequently, a seam measurement method based on a vision sensor for the space weld seam of a narrow butt joint is proposed in this article. Three laser stripes with different wavelengths are projected onto the weldment: two red laser stripes are used to measure the three-dimensional profile of the weld face by the principle of optical triangulation, and the third, green, laser stripe is used as a light source to measure the edge and the centerline of the seam by the principle of passive vision sensing. A corresponding image processing algorithm is proposed to extract the centerline of the red laser stripes as well as the seam feature. All three laser stripes are captured and processed in a single image so that the three-dimensional position of the space weld seam can be obtained simultaneously. Finally, experimental results reveal that the proposed method can meet the precision demand of space narrow butt joints.

  8. Cosmology in generalized Proca theories

    NASA Astrophysics Data System (ADS)

    De Felice, Antonio; Heisenberg, Lavinia; Kase, Ryotaro; Mukohyama, Shinji; Tsujikawa, Shinji; Zhang, Ying-li

    2016-06-01

    We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history.

  9. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure the correct ambiguity candidates lie within it, so that the search can be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the search process. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.

  10. Ecological Niche Modelling Predicts Southward Expansion of Lutzomyia (Nyssomyia) flaviscutellata (Diptera: Psychodidae: Phlebotominae), Vector of Leishmania (Leishmania) amazonensis in South America, under Climate Change.

    PubMed

    Carvalho, Bruno M; Rangel, Elizabeth F; Ready, Paul D; Vale, Mariana M

    2015-01-01

    Vector borne diseases are susceptible to climate change because distributions and densities of many vectors are climate driven. The Amazon region is endemic for cutaneous leishmaniasis and is predicted to be severely impacted by climate change. Recent records suggest that the distributions of Lutzomyia (Nyssomyia) flaviscutellata and the parasite it transmits, Leishmania (Leishmania) amazonensis, are expanding southward, possibly due to climate change, and sometimes associated with new human infection cases. We define the vector's climatic niche and explore future projections under climate change scenarios. Vector occurrence records were compiled from the literature, museum collections and Brazilian Health Departments. Six bioclimatic variables were used as predictors in six ecological niche model algorithms (BIOCLIM, DOMAIN, MaxEnt, GARP, logistic regression and Random Forest). Projections for 2050 used 17 general circulation models in two greenhouse gas representative concentration pathways: "stabilization" and "high increase". Ensemble models and consensus maps were produced by overlapping binary predictions. Final model outputs showed good performance and significance. The use of species absence data substantially improved model performance. Currently, L. flaviscutellata is widely distributed in the Amazon region, with records in the Atlantic Forest and savannah regions of Central Brazil. Future projections indicate expansion of the climatically suitable area for the vector in both scenarios, towards higher latitudes and elevations. L. flaviscutellata is likely to find increasingly suitable conditions for its expansion into areas where human population size and density are much larger than they are in its current locations. If environmental conditions change as predicted, the range of the vector is likely to expand to southeastern and central-southern Brazil, eastern Paraguay and further into the Amazonian areas of Bolivia, Peru, Ecuador, Colombia and Venezuela. These areas will only become endemic for L. amazonensis, however, if they have competent reservoir hosts and transmission dynamics matching those in the Amazon region.

  11. Use of digital control theory state space formalism for feedback at SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himel, T.; Hendrickson, L.; Rouse, F.

    The algorithms used in the database-driven SLC fast-feedback system are based on the state space formalism of digital control theory. These are implemented as a set of matrix equations which use a Kalman filter to estimate a vector of states from a vector of measurements, and then apply a gain matrix to determine the actuator settings from the state vector. The matrices used in the calculation are derived offline using Linear Quadratic Gaussian minimization. For a given noise spectrum, this procedure minimizes the rms of the states (e.g., the position or energy of the beam). The offline program also allows simulation of the loop's response to arbitrary inputs, and calculates its frequency response. 3 refs., 3 figs.
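
    A minimal sketch of one loop iteration is shown below, assuming the transition, measurement, Kalman gain, and actuator gain matrices have already been derived offline as described: the state estimate is corrected with the new measurements, and the actuator settings follow from the gain matrix. The matrix names and toy dimensions are placeholders rather than the actual SLC database quantities.

    ```python
    import numpy as np

    def feedback_step(x_est, y_meas, A, H, K, G):
        """One iteration of a state-space feedback loop (illustrative only).

        x_est  : current state estimate (e.g., beam position/energy states)
        y_meas : vector of measurements
        A      : state transition matrix
        H      : measurement matrix (states -> expected measurements)
        K      : Kalman gain matrix (precomputed offline)
        G      : actuator gain matrix from the offline LQG design
        Returns (updated state estimate, actuator settings).
        """
        x_pred = A @ x_est                           # predict the states for this pulse
        x_new = x_pred + K @ (y_meas - H @ x_pred)   # correct with the measurements
        u = -G @ x_new                               # actuator settings from the gain matrix
        return x_new, u

    # Toy dimensions: 2 states, 3 measurements, 2 actuators (hypothetical values).
    A, G = np.eye(2), np.eye(2)
    H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    K = np.full((2, 3), 0.2)
    x, u = feedback_step(np.zeros(2), np.array([0.1, -0.05, 0.02]), A, H, K, G)
    print(x, u)
    ```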

  12. A novel double fine guide sensor design on space telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Xu-xu; Yin, Da-yi

    2018-02-01

    To obtain high-precision attitude for a space telescope, a double marginal FOV (field of view) FGS (Fine Guide Sensor) is proposed. It is composed of two large-area APS CMOS sensors, both sharing the same lens along the main line of sight. More star vectors can be obtained from the two sensors and used for high-precision attitude determination. To improve star identification speed, a vector cross-product formulation of the inter-star angles suited to the small marginal FOV, different from the traditional approach, is elaborated, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then used for attitude fusion with the traditional QUEST algorithm. The simulation results show that the system can achieve high-accuracy three-axis attitude and that the scheme is feasible.
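
    To illustrate how measured star vectors and catalogue vectors yield an attitude, the sketch below solves Wahba's problem with the SVD method as a simple stand-in for QUEST (which minimizes the same loss via a quaternion eigenvalue problem). It is not the paper's implementation, and the test rotation and vectors are invented for the consistency check.

    ```python
    import numpy as np

    def attitude_from_vectors(body_vecs, ref_vecs, weights=None):
        """Optimal rotation mapping reference-frame vectors to body-frame vectors.

        body_vecs : (n, 3) unit star vectors measured by the sensors
        ref_vecs  : (n, 3) matching unit vectors from the star catalogue
        """
        if weights is None:
            weights = np.ones(len(body_vecs))
        # Attitude profile matrix B = sum_i w_i * b_i r_i^T.
        B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
        U, _, Vt = np.linalg.svd(B)
        d = np.linalg.det(U) * np.linalg.det(Vt)       # enforce a proper rotation
        return U @ np.diag([1.0, 1.0, d]) @ Vt

    # Consistency check with a known rotation about z (hypothetical star vectors).
    t = 0.3
    Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
    refs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.3, 0.4, 0.866]])
    body = refs @ Rz.T
    print(np.allclose(attitude_from_vectors(body, refs), Rz))
    ```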

  13. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.

  14. Predication-based semantic indexing: permutations as a means to encode predications in semantic space.

    PubMed

    Cohen, Trevor; Schvaneveldt, Roger W; Rindflesch, Thomas C

    2009-11-14

    Corpus-derived distributional models of semantic distance between terms have proved useful in a number of applications. For both theoretical and practical reasons, it is desirable to extend these models to encode discrete concepts and the ways in which they are related to one another. In this paper, we present a novel vector space model that encodes semantic predications derived from MEDLINE by the SemRep system into a compact spatial representation. The associations captured by this method are of a different and complementary nature to those derived by traditional vector space models, and the encoding of predication types presents new possibilities for knowledge discovery and information retrieval.
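
    The sketch below is a toy illustration, under assumed parameters, of how a permutation (here a circular shift) can encode the predicate of a subject-PREDICATE-object triple into a high-dimensional semantic vector and later be reversed to answer a query. The dimensionality, the sparse ternary index vectors, and the example concepts and predicates are placeholders; the actual predication-based semantic indexing built on SemRep output differs in its details.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 1024

    def random_index_vector():
        """Sparse ternary random vector, as used in random indexing."""
        v = np.zeros(DIM)
        idx = rng.choice(DIM, size=20, replace=False)
        v[idx[:10]], v[idx[10:]] = 1.0, -1.0
        return v

    # Hypothetical elemental vectors for concepts and one permutation per predicate.
    elemental = {c: random_index_vector() for c in ["aspirin", "headache", "fever"]}
    predicate_shift = {"TREATS": 7, "CAUSES": 13}       # permutation = circular shift

    def encode_predication(subject_vector, predicate, obj):
        """Add the predicate-permuted object vector into the subject's semantic vector."""
        return subject_vector + np.roll(elemental[obj], predicate_shift[predicate])

    # Build a semantic vector for "aspirin" from two predications.
    aspirin_sem = np.zeros(DIM)
    aspirin_sem = encode_predication(aspirin_sem, "TREATS", "headache")
    aspirin_sem = encode_predication(aspirin_sem, "TREATS", "fever")

    # Query "aspirin TREATS ?": undo the TREATS permutation and compare with concepts.
    probe = np.roll(aspirin_sem, -predicate_shift["TREATS"])
    for concept, vec in elemental.items():
        print(concept, round(float(probe @ vec), 1))    # headache and fever score high
    ```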

  15. Real-time optical laboratory solution of parabolic differential equations

    NASA Technical Reports Server (NTRS)

    Casasent, David; Jackson, James

    1988-01-01

    An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.
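
    For reference, the sketch below is a conventional NumPy version of the kind of explicit update such a processor evaluates for the two-dimensional transient diffusion equation; the optical specifics (nonbase-2 encoding, frequency multiplexing, data partitioning) are not modeled, and the grid size, diffusivity, and time step are illustrative assumptions.

    ```python
    import numpy as np

    def explicit_diffusion_step(u, alpha, dt, dx):
        """One forward-time, centered-space step of the 2-D diffusion equation.

        u : (n, n) field; boundary values are held fixed.
        Stability of this explicit scheme requires alpha*dt/dx**2 <= 0.25.
        """
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
        u_new = u + alpha * dt * lap
        # Restore the fixed boundary rows/columns (np.roll wraps around).
        u_new[0, :], u_new[-1, :] = u[0, :], u[-1, :]
        u_new[:, 0], u_new[:, -1] = u[:, 0], u[:, -1]
        return u_new

    # Toy run: a hot spot diffusing on a 32 x 32 grid.
    u = np.zeros((32, 32))
    u[16, 16] = 1.0
    for _ in range(100):
        u = explicit_diffusion_step(u, alpha=1.0, dt=0.2, dx=1.0)
    print(round(float(u.sum()), 4))
    ```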

  16. Climate Change and Vector Borne Diseases on NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; DeYoung, Russell J.; Shepanek, Marc A.; Kamel, Ahmed

    2014-01-01

    Increasing global temperature, weather patterns with above-average storm intensities, and higher sea levels have been identified as phenomena associated with global climate change. As a causal system, climate change could contribute to vector-borne diseases in humans. Vectors of concern in the vicinity of Langley Research Center include mosquitoes and ticks that transmit diseases originating regionally, nationwide, or from outside the US. Recognizing that vector-borne diseases propagate under changing climatic conditions, and understanding the conditions in which they may exist or propagate, presents opportunities for monitoring their progress and mitigating their potential impacts through communication, continued monitoring, and adaptation. Because personnel provide direct and fundamental support to NASA mission success, a continuously improving understanding of climatic conditions, and of the diseases that can result from them, helps to reduce risk in terrestrial space technologies, ground operations, and space research. This research addresses climatic conditions that promote environments conducive to the increase of disease vectors. The investigation includes evaluation of local mosquito population counts and rainfall data for statistical correlation, and identification of planning recommendations unique to LaRC and other NASA Centers, including adaptation approaches and Center-level planning strategies.

  17. Global Positioning Systems (GPS) Technology to Study Vector-Pathogen-Host Interactions

    DTIC Science & Technology

    2016-12-01

    Award Number: W81XWH-11-2-0175. Title: Global Positioning Systems (GPS) Technology to Study Vector-Pathogen-Host Interactions. The objective of this project is to examine the evolutionary consequences of introducing a tetravalent live-attenuated dengue virus vaccine into children in

  18. Achieving High Performance on the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1998-01-01

    The i860 is a high-performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran-callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solver.

  19. The Levi-Civita Tensor and Identities in Vector Analysis. Vector Field Identities. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 427.

    ERIC Educational Resources Information Center

    Yiu, Chang-li; Wilde, Carroll O.

    Vector analysis plays a key role in many branches of engineering and the physical sciences. This unit is geared towards deriving identities and establishing "machinery" to make derivations a routine task. It is noted that the module is not an applications unit, but has as its primary objective providing science,…
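
    The central piece of such "machinery" is presumably the standard contraction identity for the Levi-Civita symbol; the block below shows that identity and one vector identity it yields. This is a well-known textbook example added for illustration, not a quotation from the unit itself.

    ```latex
    % Contraction of two Levi-Civita symbols over one index:
    \varepsilon_{ijk}\,\varepsilon_{ilm} = \delta_{jl}\,\delta_{km} - \delta_{jm}\,\delta_{kl}

    % Example: the "BAC-CAB" rule follows by writing the double cross product in
    % index notation and applying the contraction identity:
    [\mathbf{a}\times(\mathbf{b}\times\mathbf{c})]_i
      = \varepsilon_{ijk}\, a_j\, \varepsilon_{klm}\, b_l c_m
      = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\, a_j b_l c_m
      = b_i\,(\mathbf{a}\cdot\mathbf{c}) - c_i\,(\mathbf{a}\cdot\mathbf{b})
    ```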

  20. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle Ha with a singularity free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.

  1. Vector boson fusion in the inert doublet model

    NASA Astrophysics Data System (ADS)

    Dutta, Bhaskar; Palacio, Guillermo; Restrepo, Diego; Ruiz-Álvarez, José D.

    2018-03-01

    In this paper we probe the inert Higgs doublet model at the LHC using the vector boson fusion (VBF) search strategy. We optimize the selection cuts and investigate the parameter space of the model and we show that the VBF search has a better reach when compared with the monojet searches. We also investigate the Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the 3σ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb-1.

  2. Non-lightlike ruled surfaces with constant curvatures in Minkowski 3-space

    NASA Astrophysics Data System (ADS)

    Ali, Ahmad Tawfik

    We study the non-lightlike ruled surfaces in Minkowski 3-space with non-lightlike base curve c(s) = ∫(αt + βn + γb)ds, where t, n, b are the tangent, principal normal and binormal vectors of an arbitrary timelike curve Γ(s). Some important results on flat, minimal, II-minimal and II-flat non-lightlike ruled surfaces are obtained. Finally, the following interesting theorem is proved: the only non-zero constant mean curvature (CMC) non-lightlike ruled surface is the developable timelike ruled surface generated by the binormal vector.

  3. Dual-scale topology optoelectronic processor.

    PubMed

    Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H

    1991-12-15

    The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
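
    As a small illustration of that generalization (not of the D-STOP hardware itself), the sketch below parameterizes a matrix-vector product by its "multiply" and "summation" operators, so the same loop structure produces either the ordinary product or, for example, a min-plus (tropical) product. The example matrices are arbitrary.

    ```python
    def generalized_matvec(A, x, mul=lambda a, b: a * b, add=sum):
        """Matrix-vector product with pluggable 'multiply' and 'summation' operators."""
        return [add(mul(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]

    A = [[1.0, 2.0], [3.0, 4.0]]
    x = [10.0, 100.0]
    # Ordinary linear-algebra product:
    print(generalized_matvec(A, x))                                   # [210.0, 430.0]
    # Min-plus (tropical) product, e.g. one step of shortest-path relaxation:
    print(generalized_matvec(A, x, mul=lambda a, b: a + b, add=min))  # [11.0, 13.0]
    ```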

  4. Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.

    PubMed

    Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng

    2018-01-01

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such concatenation does not efficiently explore the complementary properties among the different features, which should help boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited, and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.

  5. A method to determine fault vectors in 4H-SiC from stacking sequences observed on high resolution transmission electron microscopy images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Fangzhen; Wang, Huanhuan; Raghothamachar, Balaji

    A new method has been developed to determine the fault vectors associated with stacking faults in 4H-SiC from their stacking sequences observed on high resolution TEM images. This method, analogous to the Burgers circuit technique for determination of the dislocation Burgers vector, involves determining the vectors required in the projection of the perfect lattice to correct the deviated path constructed in the faulted material. Results for several different stacking faults were compared with fault vectors determined from X-ray topographic contrast analysis and were found to be consistent. This technique is expected to be applicable to all structures comprising corner-shared tetrahedra.

  6. Eisenhart lifts and symmetries of time-dependent systems

    NASA Astrophysics Data System (ADS)

    Cariglia, M.; Duval, C.; Gibbons, G. W.; Horváthy, P. A.

    2016-10-01

    Certain dissipative systems, such as Caldirola and Kannai's damped simple harmonic oscillator, may be modelled by time-dependent Lagrangian and hence time dependent Hamiltonian systems with n degrees of freedom. In this paper we treat these systems, their projective and conformal symmetries as well as their quantisation from the point of view of the Eisenhart lift to a Bargmann spacetime in n + 2 dimensions, equipped with its covariantly constant null Killing vector field. Reparametrisation of the time variable corresponds to conformal rescalings of the Bargmann metric. We show how the Arnold map lifts to Bargmann spacetime. We contrast the greater generality of the Caldirola-Kannai approach with that of Arnold and Bateman. At the level of quantum mechanics, we are able to show how the relevant Schrödinger equation emerges naturally using the techniques of quantum field theory in curved spacetimes, since a covariantly constant null Killing vector field gives rise to well defined one particle Hilbert space. Time-dependent Lagrangians arise naturally also in cosmology and give rise to the phenomenon of Hubble friction. We provide an account of this for Friedmann-Lemaître and Bianchi cosmologies and how it fits in with our previous discussion in the non-relativistic limit.

  7. A variational reconstruction method for undersampled dynamic x-ray tomography based on physical motion models

    NASA Astrophysics Data System (ADS)

    Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli

    2017-12-01

    In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume a full Radon transform at each time step, but only projections in a few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss the efficient numerical minimization based on alternating minimizations between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions from a similar amount of measurements, and of similar quality, as a single static reconstruction.
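
    Schematically, a joint model of the kind described couples a few-angle tomographic data term with an optical-flow constraint linking consecutive frames. The functional below is a generic sketch with assumed norms and regularizers, not the paper's exact formulation.

    ```latex
    \min_{u,\;\mathbf{v}} \;\;
      \sum_{t} \tfrac{1}{2} \bigl\| \mathcal{R}_{\theta_t} u_t - f_t \bigr\|_2^2
      \;+\; \alpha\, \mathrm{TV}(u)
      \;+\; \beta \sum_{t} \bigl\| \partial_t u_t + \nabla u_t \cdot \mathbf{v}_t \bigr\|_1
      \;+\; \gamma\, \mathcal{S}(\mathbf{v})
    ```

    Here R_{theta_t} is the Radon transform restricted to the few angles measured at time t, f_t is the corresponding data, the third term is the brightness-constancy (optical flow) coupling between the image sequence u and the motion field v, and S regularizes the motion vectors; alternating minimization updates u with v fixed and vice versa.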

  8. Hidden sector monopole, vector dark matter and dark radiation with Higgs portal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baek, Seungwon; Ko, P.; Park, Wan-Il, E-mail: sbaek1560@gmail.com, E-mail: pko@kias.re.kr, E-mail: wipark@kias.re.kr

    2014-10-01

    We show that the 't Hooft-Polyakov monopole model in the hidden sector with Higgs portal interaction makes a viable dark matter model, where the monopole and massive vector dark matter (VDM) are stable due to topological conservation and the unbroken subgroup U(1)_X. We show that, even though the observed CMB data require the dark gauge coupling to be quite small, the right amount of VDM thermal relic can be obtained via s-channel resonant annihilation for a VDM mass close to or smaller than half of the SM Higgs mass, thanks to the Higgs portal interaction. The monopole relic density turns out to be several orders of magnitude smaller than the observed dark matter relic density. Direct detection experiments, particularly the projected XENON1T experiment, may probe the parameter space where the dark Higgs is lighter than about 50 GeV. In addition, the dark photon associated with the unbroken U(1)_X contributes to the radiation energy density at present, giving ΔN_eff^ν ∼ 0.1 as the extra relativistic neutrino species.

  9. AOF LTAO mode: reconstruction strategy and first test results

    NASA Astrophysics Data System (ADS)

    Oberti, Sylvain; Kolb, Johann; Le Louarn, Miska; La Penna, Paolo; Madec, Pierre-Yves; Neichel, Benoit; Sauvage, Jean-François; Fusco, Thierry; Donaldson, Robert; Soenke, Christian; Suárez Valles, Marcos; Arsenault, Robin

    2016-07-01

    GALACSI is the Adaptive Optics (AO) system serving the instrument MUSE in the framework of the Adaptive Optics Facility (AOF) project. Its Narrow Field Mode (NFM) is a Laser Tomography AO (LTAO) mode delivering high resolution in the visible across a small Field of View (FoV) of 7.5" diameter around the optical axis. From a reconstruction standpoint, GALACSI NFM intends to optimize the correction on axis by estimating the turbulence in volume via a tomographic process, then projecting the turbulence profile onto one single Deformable Mirror (DM) located in the pupil, close to the ground. In this paper, the laser tomographic reconstruction process is described. Several methods (virtual DM, virtual layer projection) are studied, under the constraint of a single matrix vector multiplication. The pseudo-synthetic interaction matrix model and the LTAO reconstructor design are analysed. Moreover, the reconstruction parameter space is explored, in particular the regularization terms. Furthermore, we present here the strategy to define the modal control basis and split the reconstruction between the Low Order (LO) loop and the High Order (HO) loop. Finally, closed loop performance obtained with a 3D turbulence generator will be analysed with respect to the most relevant system parameters to be tuned.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch

    We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.

  11. Mach's principle: Exact frame-dragging via gravitomagnetism in perturbed Friedmann-Robertson-Walker universes with K=(±1,0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmid, Christoph

    We show that there is exact dragging of the axis directions of local inertial frames by a weighted average of the cosmological energy currents via gravitomagnetism for all linear perturbations of all Friedmann-Robertson-Walker (FRW) universes and of Einstein's static closed universe, for all energy-momentum-stress tensors and in the presence of a cosmological constant. This includes FRW universes arbitrarily close to the Milne universe and the de Sitter universe. Hence the postulate formulated by Ernst Mach about the physical cause for the time-evolution of inertial axes is shown to hold in general relativity for linear perturbations of FRW universes. The time-evolution of local inertial axes (relative to given local fiducial axes) is given experimentally by the precession angular velocity ω_gyro of local gyroscopes, which in turn gives the operational definition of the gravitomagnetic field: B_g ≡ -2 ω_gyro. The gravitomagnetic field is caused by energy currents J_ε via the momentum constraint, Einstein's G^0̂_î equation, (-Δ + μ²) A_g = -16π G_N J_ε with B_g = curl A_g. This equation is analogous to Ampère's law, but it holds for all time-dependent situations. Δ is the de Rham-Hodge Laplacian, and Δ = -curl curl for the vorticity sector in Riemannian 3-space. In the solution for an open universe the 1/r² force of Ampère is replaced by a Yukawa force Y_μ(r) = -(d/dr)[(1/R) exp(-μr)], form-identical for FRW backgrounds with K = (-1, 0). Here r is the measured geodesic distance from the gyroscope to the cosmological source, and 2πR is the measured circumference of the sphere centered at the gyroscope and going through the source point. The scale of the exponential cutoff is the H-dot radius, where H is the Hubble rate, dot denotes the derivative with respect to cosmic time, and μ² = -4(dH/dt). Analogous results hold in closed FRW universes and in Einstein's closed static universe. We list six fundamental tests for the principle formulated by Mach: all of them are explicitly fulfilled by our solutions. We show that only energy currents in the toroidal vorticity sector with l = 1 can affect the precession of gyroscopes. We show that the harmonic decomposition of toroidal vorticity fields in terms of vector spherical harmonics X_lm has radial functions which are form-identical for the 3-sphere, the hyperbolic 3-space, and Euclidean 3-space, and are form-identical with the spherical Bessel, Neumann, and Hankel functions. The Appendix gives the de Rham-Hodge Laplacian on vorticity fields in Riemannian 3-spaces by equations connecting the calculus of differential forms with the curl notation. We also give the derivation of the Weitzenböck formula for the difference between the de Rham-Hodge Laplacian Δ and the "rough" Laplacian ∇² on vector fields.

  12. Do vegetated rooftops attract more mosquitoes? Monitoring disease vector abundance on urban green roofs.

    PubMed

    Wong, Gwendolyn K L; Jim, C Y

    2016-12-15

    Green roof, an increasingly common constituent of urban green infrastructure, can provide multiple ecosystem services and mitigate climate-change and urban-heat-island challenges. Its adoption has been beset by a longstanding preconception of attracting urban pests like mosquitoes. As more cities may become vulnerable to emerging and re-emerging mosquito-borne infectious diseases, the knowledge gap needs to be filled. This study gauges the habitat preference of vector mosquitoes for extensive green roofs vis-à-vis positive and negative control sites in an urban setting. Seven sites in a university campus were selected to represent three experimental treatments: green roofs (GR), ground-level blue-green spaces as positive controls (PC), and bare roofs as negative controls (NC). Mosquito-trapping devices were deployed for a year from March 2015 to 2016. Human-biting mosquito species known to transmit infectious diseases in the region were identified and recorded as target species. Generalized linear models evaluated the effects of site type, season, and weather on vector-mosquito abundance. Our model revealed site type as a significant predictor of vector mosquito abundance, with considerably more vector mosquitoes captured in PC than in GR and NC. Vector abundance was higher in NC than in GR, attributed to the occasional presence of water pools in depressions of roofing membrane after rainfall. Our data also demonstrated seasonal differences in abundance. Weather variables were evaluated to assess human-vector contact risks under different weather conditions. Culex quinquefasciatus, a competent vector of diseases including lymphatic filariasis and West Nile fever, could be the most adaptable species. Our analysis demonstrates that green roofs are not particularly preferred by local vector mosquitoes compared to bare roofs and other urban spaces in a humid subtropical setting. The findings call for a better understanding of vector ecology in diverse urban landscapes to improve disease control efficacy amidst surging urbanization and changing climate. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377

  14. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; count and position of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropicity in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from disperse (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. These evidences indicate that space-time epidemics modeling should include environmental setting (e.g., vineyard geometry and topography) to capture anisotropicity as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581

  15. IPM CRSP project on tospoviruses and thrips vectors in South and Southeast Asia

    USDA-ARS?s Scientific Manuscript database

    Diseases caused by tospoviruses have become a major threat to a broad range of agricultural and horticultural crops. To date, seventeen different tospoviruses have been characterized and twelve thrips species have been identified as vectors of these viruses. Management of diseases caused by tospovir...

  16. Defining the Role of Alpha-Synuclein in Enteric Dysfunction in Parkinsons Disease

    DTIC Science & Technology

    2017-10-01

    What were the major goals of the project? Animal use approvals (accomplished pre-funding); vector production (1st round of vector... August 2017, 100% complete); vector injections. We injected all animals for the long-term survival group as well as additional subjects for shorter... time points. However, as noted below, the transgene expression seen in these animals was below that which was expected/intended. Thus, we are currently

  17. Thrust vectoring of broad ion beams for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Collett, C. R.; King, H. J.

    1973-01-01

    Thrust vectoring is shown to increase the attractiveness of ion thrusters for satellite control applications. Incorporating beam deflection into ion thrusters makes it possible to achieve attitude control without adding any thrusters. Two beam vectoring systems are described that can provide up to 10-deg beam deflection in any azimuth. Both systems have been subjected to extended life tests on a 5-cm thruster, which resulted in projected lifetimes of 7500 to 20,000 hours.

  18. Protection of Military Personnel Against Vector-Borne Diseases: A Review of Collaborative Work of the Australian and US Military Over the Last 30 Years.

    PubMed

    Frances, Stephen P; Edstein, Michael D; Debboun, Mustapha; Shanks, G Dennis

    2016-01-01

    Australian and US military medical services have collaborated since World War II to minimize vector-borne diseases such as malaria, dengue, and scrub typhus. In this review, collaboration over the last 30 years is discussed. The collaborative projects and exchange scientist programs have resulted in mutually beneficial outcomes in the fields of drug development and personal protection measures against vector-borne diseases.

  19. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

    Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames on selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and the WHT matrix. After extracting the features, weights are calculated based on the significance of the features. The weighted features are combined to form a single continuity signal, used as input for the Procedure Based shot transition Identification (PBI) process. Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard cut and gradual transition detection. To evaluate the robustness of the proposed method, a system evaluation is performed. The proposed method yields an F1-score of 97.4% for cuts, 78% for gradual transitions, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than other existing methods. In addition, a few more video sequences were taken from the Openvideo project and the performance of the proposed method was compared with a recent existing SBD method.
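
    The sketch below illustrates only the projection idea with assumed parameters: non-overlapping blocks of a grayscale frame are projected onto Walsh-Hadamard basis vectors and a few coefficients per block are kept as a feature vector, so that a large distance between consecutive frames' features suggests a cut candidate. The block size, the number of retained coefficients, and the scipy-based kernel construction are illustrative choices; the paper's full method additionally uses color, edge, texture, and motion features with significance-based weights.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def wht_block_features(frame_gray, block=8, n_coeffs=4):
        """Block-wise Walsh-Hadamard features of a grayscale frame.

        frame_gray : 2-D array whose sides are divisible by `block`
        Keeps an n_coeffs x n_coeffs corner of each block's transform (a crude
        summary; note the Hadamard rows are in natural, not sequency, order).
        """
        H = hadamard(block).astype(float) / np.sqrt(block)   # orthonormal WHT kernel
        h, w = frame_gray.shape
        feats = []
        for i in range(0, h, block):
            for j in range(0, w, block):
                b = frame_gray[i:i + block, j:j + block].astype(float)
                coeffs = H @ b @ H.T                          # 2-D Walsh-Hadamard transform
                feats.append(coeffs[:n_coeffs, :n_coeffs].ravel())
        return np.concatenate(feats)

    # A large feature distance between consecutive frames suggests a shot boundary.
    f1, f2 = np.random.rand(64, 64), np.random.rand(64, 64)
    print(float(np.linalg.norm(wht_block_features(f1) - wht_block_features(f2))))
    ```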

  20. Connecting Coronal Mass Ejections to Their Solar Active Region Sources: Combining Results from the HELCATS and FLARECAST Projects

    NASA Astrophysics Data System (ADS)

    Murray, Sophie A.; Guerra, Jordan A.; Zucca, Pietro; Park, Sung-Hong; Carley, Eoin P.; Gallagher, Peter T.; Vilmer, Nicole; Bothmer, Volker

    2018-04-01

    Coronal mass ejections (CMEs) and other solar eruptive phenomena can be physically linked by combining data from a multitude of ground-based and space-based instruments alongside models; however, this can be challenging for automated operational systems. The EU Framework Programme 7 HELCATS project provides catalogues of CME observations and properties from the Heliospheric Imagers on board the two NASA/STEREO spacecraft in order to track the evolution of CMEs in the inner heliosphere. From the main HICAT catalogue of over 2,000 CME detections, an automated algorithm has been developed to connect the CMEs observed by STEREO to any corresponding solar flares and active-region (AR) sources on the solar surface. CME kinematic properties, such as speed and angular width, are compared with AR magnetic field properties, such as magnetic flux, area, and neutral line characteristics. The resulting LOWCAT catalogue is also compared to the extensive AR property database created by the EU Horizon 2020 FLARECAST project, which provides more complex magnetic field parameters derived from vector magnetograms. Initial statistical analysis has been undertaken on the new data to provide insight into the link between flare and CME events, and characteristics of eruptive ARs. Warning thresholds determined from analysis of the evolution of these parameters are shown to be a useful output for operational space weather purposes. Parameters of particular interest for further analysis include total unsigned flux, vertical current, and current helicity. The automated method developed to create the LOWCAT catalogue may also be useful for future efforts to develop operational CME forecasting.

  1. Vector-averaged gravity does not alter acetylcholine receptor single channel properties

    NASA Technical Reports Server (NTRS)

    Reitstetter, R.; Gruener, R.

    1994-01-01

    To examine the physiological sensitivity of membrane receptors to altered gravity, we examined the single channel properties of the acetylcholine receptor (AChR), in co-cultures of Xenopus myocytes and neurons, to vector-averaged gravity in the clinostat. This experimental paradigm produces an environment in which, from the cell's perspective, the gravitational vector is "nulled" by continuous averaging. In that respect, the clinostat simulates one aspect of space microgravity where the gravity force is greatly reduced. After clinorotation, the AChR channel mean open-time and conductance were statistically not different from control values but showed a rotation-dependent trend that suggests a process of cellular adaptation to clinorotation. These findings therefore suggest that the AChR channel function may not be affected in the microgravity of space despite changes in the receptor's cellular organization.

  2. Test spaces and characterizations of quadratic spaces

    NASA Astrophysics Data System (ADS)

    Dvurečenskij, Anatolij

    1996-10-01

    We show that a test space consisting of nonzero vectors of a quadratic space E and of the set of all maximal orthogonal systems in E is algebraic iff E is Dacey or, equivalently, iff E is orthomodular. In addition, we present other orthomodularity criteria for quadratic spaces, and using the result of Solèr, we show that they can imply that E is a real, complex, or quaternionic Hilbert space.

  3. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In the method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
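
    The velocity-blending idea can be illustrated with a minimal sketch, assuming a third-order-polynomial blending function and straight-line segments between via points; the function names, blend window and example points are illustrative, not the trajectory code described above.

      # Illustrative sketch of velocity-vector blending between two straight-line
      # segments joining via points p0 -> p1 -> p2 (not the flight code described above).
      import numpy as np

      def segment_velocity(p_from, p_to, duration):
          return (np.asarray(p_to) - np.asarray(p_from)) / duration

      def blended_velocity(v_in, v_out, tau):
          """Third-order-polynomial blend: tau runs 0..1 across the blend window.
          s(tau) = 3*tau**2 - 2*tau**3 rises smoothly from 0 to 1 with zero end slopes."""
          s = 3 * tau**2 - 2 * tau**3
          return (1 - s) * v_in + s * v_out

      # Example: blend from segment p0->p1 into segment p1->p2 over the blend window
      p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([1., 1., 0.])
      v_in = segment_velocity(p0, p1, duration=2.0)
      v_out = segment_velocity(p1, p2, duration=2.0)
      for tau in np.linspace(0.0, 1.0, 5):
          print(tau, blended_velocity(v_in, v_out, tau))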

  4. Sample levitation and melt in microgravity

    NASA Technical Reports Server (NTRS)

    Moynihan, Philip I. (Inventor)

    1990-01-01

    A system is described for maintaining a sample material in a molten state and away from the walls of a container in a microgravity environment, as in a space vehicle. A plurality of sources of electromagnetic radiation, such as an infrared wavelength, are spaced about the object, with the total net electromagnetic radiation applied to the object being sufficient to maintain it in a molten state, and with the vector sum of the applied radiation being in a direction to maintain the sample close to a predetermined location away from the walls of a container surrounding the sample. For a processing system in a space vehicle that orbits the Earth, the net radiation vector is opposite the velocity of the orbiting vehicle.

  5. Sample levitation and melt in microgravity

    NASA Technical Reports Server (NTRS)

    Moynihan, Philip I. (Inventor)

    1987-01-01

    A system is described for maintaining a sample material in a molten state and away from the walls of a container in a microgravity environment, as in a space vehicle. A plurality of sources of electromagnetic radiation, such as of an infrared wavelength, are spaced about the object, with the total net electromagnetic radiation applied to the object being sufficient to maintain it in a molten state, and with the vector sum of the applied radiation being in a direction to maintain the sample close to a predetermined location away from the walls of a container surrounding the sample. For a processing system in a space vehicle that orbits the Earth, the net radiation vector is opposite the velocity of the orbiting vehicle.

  6. Exclusive vector meson production with leading neutrons in a saturation model for the dipole amplitude in mixed space

    NASA Astrophysics Data System (ADS)

    Amaral, J. T.; Becker, V. M.

    2018-05-01

    We investigate ρ vector meson production in e p collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.

  7. Supersymmetric dS/CFT

    NASA Astrophysics Data System (ADS)

    Hertog, Thomas; Tartaglino-Mazzucchelli, Gabriele; Van Riet, Thomas; Venken, Gerben

    2018-02-01

    We put forward new explicit realisations of dS/CFT that relate N = 2 supersymmetric Euclidean vector models with reversed spin-statistics in three dimensions to specific supersymmetric Vasiliev theories in four-dimensional de Sitter space. The partition function of the free supersymmetric vector model deformed by a range of low spin deformations that preserve supersymmetry appears to specify a well-defined wave function with asymptotic de Sitter boundary conditions in the bulk. In particular we find the wave function is globally peaked at undeformed de Sitter space, with a low amplitude for strong deformations. This suggests that supersymmetric de Sitter space is stable in higher-spin gravity and in particular free from ghosts. We speculate this is a limiting case of the de Sitter realizations in exotic string theories.

  8. Interacting vector fields in relativity without relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Edward; Barbour, Julian

    2002-06-01

    Barbour, Foster and Ó Murchadha have recently developed a new framework, called here the 3-space approach, for the formulation of classical bosonic dynamics. Neither time nor a locally Minkowskian structure of spacetime are presupposed. Both arise as emergent features of the world from geodesic-type dynamics on a space of three-dimensional metric-matter configurations. In fact gravity, the universal light-cone and Abelian gauge theory minimally coupled to gravity all arise naturally through a single common mechanism. It yields relativity - and more - without presupposing relativity. This paper completes the recovery of the presently known bosonic sector within the 3-space approach. We show, for a rather general ansatz, that 3-vector fields can interact among themselves only as Yang-Mills fields minimally coupled to gravity.

  9. Intertwined Hamiltonians in two-dimensional curved spaces

    NASA Astrophysics Data System (ADS)

    Aghababaei Samani, Keivan; Zarei, Mina

    2005-04-01

    The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for Euclidean plane, Minkowski plane, Poincaré half plane (AdS2), de Sitter plane (dS2), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle.

  10. Regular and Chaotic Spatial Distribution of Bose-Einstein Condensed Atoms in a Ratchet Potential

    NASA Astrophysics Data System (ADS)

    Li, Fei; Xu, Lan; Li, Wenwu

    2018-02-01

    We study the regular and chaotic spatial distribution of Bose-Einstein condensed atoms with a space-dependent nonlinear interaction in a ratchet potential. There exists in the system a space-dependent atomic current that can be tuned via the Feshbach resonance technique. In the presence of the space-dependent atomic current and a weak ratchet potential, the Smale-horseshoe chaos is studied and the Melnikov chaotic criterion is obtained. Numerical simulations show that the ratio between the intensities of the optical potentials forming the ratchet potential, the wave vector of the laser producing the ratchet potential, or the wave vector of the modulating laser can be chosen as controlling parameters to induce or avoid chaotic spatial distributions.

  11. A static investigation of the thrust vectoring system of the F/A-18 high-alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Mason, Mary L.; Capone, Francis J.; Asbury, Scott C.

    1992-01-01

    A static (wind-off) test was conducted in the static test facility of the Langley 16-foot Transonic Tunnel to evaluate the vectoring capability and isolated nozzle performance of the proposed thrust vectoring system of the F/A-18 high alpha research vehicle (HARV). The thrust vectoring system consisted of three asymmetrically spaced vanes installed externally on a single test nozzle. Two nozzle configurations were tested: A maximum afterburner-power nozzle and a military-power nozzle. Vane size and vane actuation geometry were investigated, and an extensive matrix of vane deflection angles was tested. The nozzle pressure ratios ranged from two to six. The results indicate that the three vane system can successfully generate multiaxis (pitch and yaw) thrust vectoring. However, large resultant vector angles incurred large thrust losses. Resultant vector angles were always lower than the vane deflection angles. The maximum thrust vectoring angles achieved for the military-power nozzle were larger than the angles achieved for the maximum afterburner-power nozzle.

  12. Development of 4D mathematical observer models for the task-based evaluation of gated myocardial perfusion SPECT

    NASA Astrophysics Data System (ADS)

    Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.

    2015-04-01

    This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman’s rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of the performance of human observers in detecting regional motion defects in 4D gated MP SPECT images. The result supports the use of the observer model in the optimization and evaluation of 4D image reconstruction and compensation methods for improving the detection of motion abnormalities in 4D gated MP SPECT images.
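
    A minimal channelized-Hotelling-observer sketch is given below to outline the first (2D spatial channel) model; the channel matrix, data layout, noise model and AUC estimator are illustrative assumptions rather than the authors' 4D implementation.

      # Minimal channelized-Hotelling-observer sketch (illustrative only; channel
      # profiles and data shapes are placeholders, not those of the paper).
      import numpy as np

      def cho_auc(signal_imgs, noise_imgs, channels):
          """signal_imgs, noise_imgs: (n_images, n_pixels); channels: (n_pixels, n_channels)."""
          vs = signal_imgs @ channels                  # channelized feature vectors, signal-present
          vn = noise_imgs @ channels                   # channelized feature vectors, signal-absent
          ms, mn = vs.mean(axis=0), vn.mean(axis=0)
          S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))  # intra-class scatter
          w = np.linalg.solve(S, ms - mn)              # Hotelling template
          ts, tn = vs @ w, vn @ w                      # observer test statistics
          return np.mean(ts[:, None] > tn[None, :])    # AUC via the Wilcoxon statistic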

  13. Evaluating Middle School Students' Spatial-scientific Performance in Earth-space Science

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jennifer; Jackson, C.; Toland, M. D.; Cole, M.; Wilhelm, R. J.

    2013-06-01

    Many astronomical concepts cannot be understood without a developed understanding of four spatial-mathematics domains defined as follows: a) Geometric Spatial Visualization (GSV) - Visualizing the geometric features of a system as it appears above, below, and within the system’s plane; b) Spatial Projection (SP) - Projecting to a different location and visualizing from that global perspective; c) Cardinal Directions (CD) - Distinguishing directions (N, S, E, W) in order to document an object’s vector position in space; and d) Periodic Patterns - (PP) Recognizing occurrences at regular intervals of time and/or space. For this study, differences were examined between groups of sixth grade students’ spatial-scientific development pre/post implementation of an Earth/Space unit. Treatment teachers employed a NASA-based curriculum (Realistic Explorations in Astronomical Learning), while control teachers implemented their regular Earth/Space units. A 2-level hierarchical linear model was used to evaluate student performance on the Lunar Phases Concept Inventory (LPCI) and four spatial-mathematics domains, while controlling for two variables (gender and ethnicity) at the student level and one variable (teaching experience) at the teacher level. Overall LPCI results show pre-test scores predicted post-test scores, boys performed better than girls, and Whites performed better than non-Whites. We also compared experimental and control groups’ by spatial-mathematics domain outcomes. For GSV, it was found that boys, in general, tended to have higher GSV post-scores. For domains CD and SP, no statistically significant differences were observed. PP results show Whites performed better than non-Whites. Also for PP, a significant cross-level interaction term (gender-treatment) was observed, which means differences in control and experimental groups are dependent on students’ gender. These findings can be interpreted as: (a) the experimental girls scored higher than the control girls and/or (b) the control group displayed a gender gap in favor of boys while no gender gap was displayed within the experimental group.

  14. Transgene expression in target-defined neuron populations mediated by retrograde infection with adeno-associated viral vectors.

    PubMed

    Rothermel, Markus; Brunert, Daniela; Zabawa, Christine; Díaz-Quesada, Marta; Wachowiak, Matt

    2013-09-18

    Tools enabling the manipulation of well defined neuronal subpopulations are critical for probing complex neuronal networks. Cre recombinase (Cre) mouse driver lines in combination with the Cre-dependent expression of proteins using viral vectors--in particular, recombinant adeno-associated viral vectors (rAAVs)--have emerged as a widely used platform for achieving transgene expression in specified neural populations. However, the ability of rAAVs to further specify neuronal subsets on the basis of their anatomical connectivity has been reported as limited or inconsistent. Here, we systematically tested a variety of widely used neurotropic rAAVs for their ability to mediate retrograde gene transduction in the mouse brain. We tested pseudotyped rAAVs of several common serotypes (rAAV 2/1, 2/5, and 2/9) as well as constructs both with and without Cre-dependent expression switches. Many of the rAAVs tested--in particular, though not exclusively, Cre-dependent vectors--showed a robust capacity for retrograde infection and transgene expression. Retrograde expression was successful over distances as large as 6 mm and in multiple neuron types, including olfactory projection neurons, neocortical pyramidal cells projecting to distinct targets, and corticofugal and modulatory projection neurons. Retrograde infection using transgenes such as ChR2 allowed for optical control or optically assisted electrophysiological identification of neurons defined genetically as well as by their projection target. These results establish a widely accessible tool for achieving combinatorial specificity and stable, long-term transgene expression to isolate precisely defined neuron populations in the intact animal.

  15. Construction of fusion vectors of corynebacteria: expression of glutathione-S-transferase fusion protein in Corynebacterium acetoacidophilum ATCC 21476.

    PubMed

    Srivastava, Preeti; Deb, J K

    2002-07-02

    A series of fusion vectors containing glutathione-S-transferase (GST) were constructed by inserting GST fusion cassette of Escherichia coli vectors pGEX4T-1, -2 and -3 in corynebacterial vector pBK2. Efficient expression of GST driven by inducible tac promoter of E. coli was observed in Corynebacterium acetoacidophilum. Fusion of enhanced green fluorescent protein (EGFP) and streptokinase genes in this vector resulted in the synthesis of both the fusion proteins. The ability of this recombinant organism to produce several-fold more of the product in the extracellular medium than in the intracellular space would make this system quite attractive as far as the downstream processing of the product is concerned.

  16. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
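
    The core random-projection step can be sketched as follows, assuming a Gaussian i.i.d. projection matrix generated from a user-specific seed; the seed, output dimension and scaling are illustrative choices, not the exact construction used in the paper.

      # Sketch of the random-projection step (Gaussian i.i.d. entries); the 'seed'
      # plays the role of a user-specific transform and is an illustrative assumption.
      import numpy as np

      def random_project(feature_vec, out_dim, seed):
          rng = np.random.default_rng(seed)
          x = np.asarray(feature_vec, dtype=float)
          # 1/sqrt(out_dim) scaling approximately preserves norms (Johnson-Lindenstrauss)
          R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(out_dim, x.size))
          return R @ x                                  # changeable, hard-to-invert template

      # Re-issuing a template only requires a new seed; matching compares projected vectors.
      template_a = random_project(np.random.rand(10304), out_dim=256, seed=42)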

  17. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  18. Robust and Efficient Spin Purification for Determinantal Configuration Interaction.

    PubMed

    Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G

    2017-09-12

    The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag₁₉) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.

  19. An exact solution of the van der Waals interaction between two ground-state hydrogen atoms

    NASA Astrophysics Data System (ADS)

    Koga, Toshikatsu; Matsumoto, Shinya

    1985-06-01

    A momentum space treatment shows that perturbation equations for the H(1s)-H(1s) van der Waals interaction can be exactly solved in their Schrödinger forms without invoking any variational methods. Using the Fock transformation, which projects the momentum vector of an electron from the three-dimensional hyperplane onto the four-dimensional hypersphere, we solve the third order integral-type perturbation equation with respect to the reciprocal of the internuclear distance R. An exact third order wave function is found as a linear combination of an infinite number of four-dimensional spherical harmonics. The result allows us to evaluate the exact dispersion energy E₆R⁻⁶, which is completely determined by the first three coefficients of the above linear combination.

  20. A nonlinear discriminant algorithm for feature extraction and data classification.

    PubMed

    Santa Cruz, C; Dorronsoro, J R

    1998-01-01

    This paper presents a nonlinear supervised feature extraction algorithm that combines Fisher's criterion function with a preliminary perceptron-like nonlinear projection of vectors in pattern space. Its main motivation is to combine the approximation properties of multilayer perceptrons (MLP's) with the target-free nature of Fisher's classical discriminant analysis. In fact, although MLP's provide good classifiers for many problems, there may be some situations, such as unequal class sizes with a high degree of pattern mixing among them, that make the construction of good MLP classifiers difficult. In these instances, the features extracted by our procedure could be more effective. After the description of its construction and the analysis of its complexity, we will illustrate its use over a synthetic problem with the above characteristics.

  1. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  2. Hájek-Rényi inequality for m-asymptotically almost negatively associated random vectors in Hilbert space and applications.

    PubMed

    Ko, Mi-Hwa

    2018-01-01

    In this paper, we obtain the Hájek-Rényi inequality and, as an application, we study the strong law of large numbers for H-valued m-asymptotically almost negatively associated random vectors with mixing coefficients [Formula: see text] such that [Formula: see text].

  3. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
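
    For orientation, the following sketch generates load-dependent Ritz vectors by the usual static-response/inertia-load recurrence with mass-orthonormalization; the dense solver and variable names are illustrative assumptions, and a real large-space-structure model would use sparse factorizations.

      # Sketch of load-dependent Ritz vector generation, assuming dense K (stiffness),
      # M (mass) and a spatial load vector f; illustrative, not the paper's code.
      import numpy as np

      def load_dependent_ritz(K, M, f, n_vectors):
          X = []
          x = np.linalg.solve(K, f)                    # static response to the applied load
          x /= np.sqrt(x @ M @ x)                      # mass-normalize
          X.append(x)
          for _ in range(1, n_vectors):
              x = np.linalg.solve(K, M @ X[-1])        # response to inertia load of previous vector
              for xi in X:                             # M-orthogonalize (Gram-Schmidt)
                  x -= (xi @ M @ x) * xi
              x /= np.sqrt(x @ M @ x)
              X.append(x)
          return np.column_stack(X)                    # reduced basis for the dynamic analysis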

  4. Energy theorem for (2+1)-dimensional gravity.

    NASA Astrophysics Data System (ADS)

    Menotti, P.; Seminara, D.

    1995-05-01

    We prove a positive energy theorem in (2+1)-dimensional gravity for open universes and any matter energy-momentum tensor satisfying the dominant energy condition. We consider on the space-like initial value surface a family of widening Wilson loops and show that the energy-momentum of the enclosed subsystem is a future directed time-like vector whose mass is an increasing function of the loop, until it reaches the value 1/4G corresponding to a deficit angle of 2π. At this point the energy-momentum of the system evolves, depending on the nature of a zero norm vector appearing in the evolution equations, either into a time-like vector of a universe which closes kinematically or into a Gott-like universe whose energy-momentum vector, as first recognized by Deser, Jackiw, and 't Hooft (1984), is space-like. This treatment generalizes results obtained by Carroll, Farhi, Guth, and Olum (1994) for a system of point-like spinless particles to the most general form of matter whose energy-momentum tensor satisfies the dominant energy condition. The treatment is also given for anti-de Sitter (2+1)-dimensional gravity.

  5. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, being an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. The VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into the low dimensional label space. The classification of patterns is carried out in this low dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to that of multiclass SVM, with comparable testing accuracy, principally in large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.

  6. Use of Mapping and Spatial and Space-Time Modeling Approaches in Operational Control of Aedes aegypti and Dengue

    PubMed Central

    Eisen, Lars; Lozano-Fuentes, Saul

    2009-01-01

    The aims of this review paper are to 1) provide an overview of how mapping and spatial and space-time modeling approaches have been used to date to visualize and analyze mosquito vector and epidemiologic data for dengue; and 2) discuss the potential for these approaches to be included as routine activities in operational vector and dengue control programs. Geographical information system (GIS) software is becoming more user-friendly and is now complemented by free mapping software that provides access to satellite imagery and basic feature-making tools and has the capacity to generate static maps as well as dynamic time-series maps. Our challenge is now to move beyond the research arena by transferring mapping and GIS technologies and spatial statistical analysis techniques in user-friendly packages to operational vector and dengue control programs. This will enable control programs to, for example, generate risk maps for exposure to dengue virus, develop Priority Area Classifications for vector control, and explore socioeconomic associations with dengue risk. PMID:19399163

  7. Cosmology in generalized Proca theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Mukohyama, Shinji; Heisenberg, Lavinia

    2016-06-01

    We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history.

  8. Prediction of hourly PM2.5 using a space-time support vector regression model

    NASA Astrophysics Data System (ADS)

    Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang

    2018-05-01

    Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
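
    The overall pipeline can be sketched as below, assuming scikit-learn and a simple data layout: stations are clustered into quasi-homogeneous subareas, a Gauss-weighted spatial-lag feature stands in for the spatial autocorrelation variable, and one support vector regressor is fitted per subarea. The bandwidth, kernel and cluster count are illustrative, not the values used in the paper.

      # Schematic pipeline (assumed data layout, not the authors' implementation):
      # cluster monitoring sites, build a Gauss-weighted spatial-lag feature, then fit
      # one support vector regressor per subarea.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVR

      def gauss_lag(target_xy, neighbor_xy, neighbor_pm, bandwidth=10.0):
          d2 = np.sum((neighbor_xy - target_xy) ** 2, axis=1)
          w = np.exp(-d2 / (2 * bandwidth ** 2))
          return np.sum(w * neighbor_pm) / np.sum(w)   # spatial-autocorrelation feature

      def fit_subarea_models(site_xy, features, targets, n_subareas=4):
          labels = KMeans(n_clusters=n_subareas, n_init=10).fit_predict(site_xy)
          models = {}
          for k in range(n_subareas):
              idx = labels == k
              models[k] = SVR(kernel="rbf", C=10.0).fit(features[idx], targets[idx])
          return labels, models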

  9. Integrated Design Analysis and Optimisation of Aircraft Structures (L'Analyse pour la Conception Integree et l'Optimisation des Structures d'Aeronefs)

    DTIC Science & Technology

    1992-02-01

    Division (Code RM), ONERA, 29 ave de la Division Leclerc, 92320 Châtillon, France; Office of Aeronautics & Space Technology, NASA Hq, Washington DC 20546, United...Vector of thickness variables V = [t1, t2, ..., tN]. Vector of thickness changes ΔV = [δt1, δt2, ..., δtN]. Vector of strain derivatives ∇F = [dF/dt1, dF/dt2, ..., dF/dtN]. Vector of buckling derivatives ∇λ = [dλ/dt1, dλ/dt2, ..., dλ/dtN]. Then δF = ∇F · ΔV and δλ = ∇λ · ΔV. The linearised

  10. A procedure for accurate calibration of the orientation of the three sensors in a vector magnetometer. [at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1977-01-01

    Procedures are described for the calibration of a vector magnetometer of high absolute accuracy. It is assumed that the calibration will be performed in the magnetic test facility of Goddard Space Flight Center (GSFC). The first main section of the report describes the test equipment and facility calibrations required. The second presents procedures for calibrating individual sensors. The third discusses the calibration of the sensor assembly. In a final section recommendations are made to GSFC for modification of the test facility required to carry out the calibration procedures.

  11. Flexible body stability analysis of Space Shuttle ascent flight control system by using lambda matrix solution techniques

    NASA Technical Reports Server (NTRS)

    Bown, R. L.; Christofferson, A.; Lardas, M.; Flanders, H.

    1980-01-01

    A lambda matrix solution technique is being developed to perform an open loop frequency analysis of a high order dynamic system. The procedure evaluates the right and left latent vectors corresponding to the respective latent roots. The latent vectors are used to evaluate the partial fraction expansion formulation required to compute the flexible body open loop feedback gains for the Space Shuttle Digital Ascent Flight Control System. The algorithm is in the final stages of development and will be used to insure that the feedback gains meet the design specification.

  12. Vector space methods of photometric analysis - Applications to O stars and interstellar reddening

    NASA Technical Reports Server (NTRS)

    Massa, D.; Lillie, C. F.

    1978-01-01

    A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.

  13. Experimental Results of Underwater Cooperative Source Localization Using a Single Acoustic Vector Sensor

    PubMed Central

    Felisberto, Paulo; Rodriguez, Orlando; Santos, Paulo; Ey, Emanuel; Jesus, Sérgio M.

    2013-01-01

    This paper aims at estimating the azimuth, range and depth of a cooperative broadband acoustic source with a single vector sensor in a multipath underwater environment, where the received signal is assumed to be a linear combination of echoes of the source emitted waveform. A vector sensor is a device that measures the scalar acoustic pressure field and the vectorial acoustic particle velocity field at a single location in space. The amplitudes of the echoes in the vector sensor components allow one to determine their azimuth and elevation. Assuming that the environmental conditions of the channel are known, source range and depth are obtained from the estimates of elevation and relative time delays of the different echoes using a ray-based backpropagation algorithm. The proposed method is tested using simulated data and is further applied to experimental data from the Makai'05 experiment, where 8–14 kHz chirp signals were acquired by a vector sensor array. It is shown that for short ranges, the position of the source is estimated in agreement with the geometry of the experiment. The method has low computational demands and is thus well-suited for use in mobile and light platforms, where space and power requirements are limited. PMID:23857257

  14. Short-interval SMS wind vector determinations for a severe local storms area

    NASA Technical Reports Server (NTRS)

    Peslen, C. A.

    1980-01-01

    Short-interval SMS-2 visible digital image data are used to derive wind vectors from cloud tracking on time-lapsed sequences of geosynchronous satellite images. The cloud tracking areas are located in the Central Plains, where on May 6, 1975 hail-producing thunderstorms occurred ahead of a well defined dry line. Cloud tracking is performed on the Goddard Space Flight Center Atmospheric and Oceanographic Information Processing System. Lower tropospheric cumulus tracers are selected with the assistance of a cloud-top height algorithm. Divergence is derived from the cloud motions using a modified Cressman (1959) objective analysis technique which is designed to organize irregularly spaced wind vectors into uniformly gridded wind fields. The results demonstrate the feasibility of using satellite-derived wind vectors and their associated divergence fields in describing the conditions preceding severe local storm development. For this case, an area of convergence appeared ahead of the dry line and coincided with the developing area of severe weather. The magnitude of the maximum convergence varied between -10 to the -5th and -10 to the -14th per sec. The number of satellite-derived wind vectors which were required to describe conditions of the low-level atmosphere was adequate before numerous cumulonimbus cells formed. This technique is limited in areas of advanced convection.
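
    The objective-analysis step can be illustrated with the following sketch, which grids irregularly spaced cloud-motion wind vectors with the classical Cressman weight w = (R² - r²)/(R² + r²) inside an influence radius R; the grid, radius and data layout are illustrative assumptions rather than the modified scheme used in the study.

      # Sketch of Cressman-type objective analysis: interpolate irregularly spaced wind
      # vectors onto a regular grid (grid spacing and influence radius are illustrative).
      import numpy as np

      def cressman_grid(obs_xy, obs_uv, grid_x, grid_y, radius):
          U = np.full((grid_y.size, grid_x.size, 2), np.nan)
          for j, gy in enumerate(grid_y):
              for i, gx in enumerate(grid_x):
                  r2 = (obs_xy[:, 0] - gx) ** 2 + (obs_xy[:, 1] - gy) ** 2
                  inside = r2 < radius ** 2
                  if inside.any():
                      w = (radius ** 2 - r2[inside]) / (radius ** 2 + r2[inside])
                      U[j, i] = (w[:, None] * obs_uv[inside]).sum(0) / w.sum()
          return U   # gridded u,v field; divergence then follows by finite differences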

  15. Spatially-Explicit Simulation Modeling of Ecological Response to Climate Change: Methodological Considerations in Predicting Shifting Population Dynamics of Infectious Disease Vectors.

    PubMed

    Dhingra, Radhika; Jimenez, Violeta; Chang, Howard H; Gambhir, Manoj; Fu, Joshua S; Liu, Yang; Remais, Justin V

    2013-09-01

    Poikilothermic disease vectors can respond to altered climates through spatial changes in both population size and phenology. Quantitative descriptors to characterize, analyze and visualize these dynamic responses are lacking, particularly across large spatial domains. In order to demonstrate the value of a spatially explicit, dynamic modeling approach, we assessed spatial changes in the population dynamics of Ixodes scapularis , the Lyme disease vector, using a temperature-forced population model simulated across a grid of 4 × 4 km cells covering the eastern United States, using both modeled (Weather Research and Forecasting (WRF) 3.2.1) baseline/current (2001-2004) and projected (Representative Concentration Pathway (RCP) 4.5 and RCP 8.5; 2057-2059) climate data. Ten dynamic population features (DPFs) were derived from simulated populations and analyzed spatially to characterize the regional population response to current and future climate across the domain. Each DPF under the current climate was assessed for its ability to discriminate observed Lyme disease risk and known vector presence/absence, using data from the US Centers for Disease Control and Prevention. Peak vector population and month of peak vector population were the DPFs that performed best as predictors of current Lyme disease risk. When examined under baseline and projected climate scenarios, the spatial and temporal distributions of DPFs shift and the seasonal cycle of key questing life stages is compressed under some scenarios. Our results demonstrate the utility of spatial characterization, analysis and visualization of dynamic population responses-including altered phenology-of disease vectors to altered climate.

  16. Spatially-Explicit Simulation Modeling of Ecological Response to Climate Change: Methodological Considerations in Predicting Shifting Population Dynamics of Infectious Disease Vectors

    PubMed Central

    Dhingra, Radhika; Jimenez, Violeta; Chang, Howard H.; Gambhir, Manoj; Fu, Joshua S.; Liu, Yang; Remais, Justin V.

    2014-01-01

    Poikilothermic disease vectors can respond to altered climates through spatial changes in both population size and phenology. Quantitative descriptors to characterize, analyze and visualize these dynamic responses are lacking, particularly across large spatial domains. In order to demonstrate the value of a spatially explicit, dynamic modeling approach, we assessed spatial changes in the population dynamics of Ixodes scapularis, the Lyme disease vector, using a temperature-forced population model simulated across a grid of 4 × 4 km cells covering the eastern United States, using both modeled (Weather Research and Forecasting (WRF) 3.2.1) baseline/current (2001–2004) and projected (Representative Concentration Pathway (RCP) 4.5 and RCP 8.5; 2057–2059) climate data. Ten dynamic population features (DPFs) were derived from simulated populations and analyzed spatially to characterize the regional population response to current and future climate across the domain. Each DPF under the current climate was assessed for its ability to discriminate observed Lyme disease risk and known vector presence/absence, using data from the US Centers for Disease Control and Prevention. Peak vector population and month of peak vector population were the DPFs that performed best as predictors of current Lyme disease risk. When examined under baseline and projected climate scenarios, the spatial and temporal distributions of DPFs shift and the seasonal cycle of key questing life stages is compressed under some scenarios. Our results demonstrate the utility of spatial characterization, analysis and visualization of dynamic population responses—including altered phenology—of disease vectors to altered climate. PMID:24772388

  17. A mapping of an ensemble of mitochondrial sequences for various organisms into 3D space based on the word composition.

    PubMed

    Aita, Takuyo; Nishigaki, Koichi

    2012-11-01

    To visualize a bird's-eye view of an ensemble of mitochondrial genome sequences for various species, we recently developed a novel method of mapping a biological sequence ensemble into Three-Dimensional (3D) vector space. First, we represented a biological sequence of a species s by a word-composition vector x(s), where its length |x(s)| represents the sequence length, its unit vector x(s)/|x(s)| represents the relative composition of the K-tuple words through the sequence, and the size of the dimension, N = 4^K, is the number of all possible words of length K. Second, we mapped the vector x(s) to the 3D position vector y(s), based on the two following simple principles: (1) |y(s)| = |x(s)| and (2) the angle between y(s) and y(t) maximally correlates with the angle between x(s) and x(t). The mitochondrial genome sequences for 311 species, including 177 Animalia, 85 Fungi and 49 Green plants, were mapped into 3D space by using K=7. The mapping was successful because the angles between vectors before and after the mapping highly correlated with each other (correlation coefficients were 0.92-0.97). Interestingly, the Animalia kingdom is distributed along a single arc belt (just like the Milky Way on a Celestial Globe), and the Fungi and Green plant kingdoms are distributed in a similar arc belt. These two arc belts intersect at their respective middle regions and form a cross structure just like a jet aircraft fuselage and its wings. This new mapping method will allow researchers to intuitively interpret the visual information presented in the maps in a highly effective manner. Copyright © 2012 Elsevier Inc. All rights reserved.
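
    The construction of the word-composition vector x(s) can be sketched as follows, assuming a DNA alphabet and K = 7; the subsequent angle-preserving 3D embedding is not reproduced here, and the helper names are illustrative.

      # Sketch of the word-composition vector x(s): a 4^K-dimensional count of K-tuples,
      # rescaled so its Euclidean length equals the sequence length (the 3D embedding
      # step described in the paper is not reproduced here).
      from itertools import product
      import numpy as np

      def composition_vector(seq, K=7):
          words = ["".join(p) for p in product("ACGT", repeat=K)]
          index = {w: i for i, w in enumerate(words)}
          counts = np.zeros(len(words))
          for i in range(len(seq) - K + 1):
              w = seq[i:i + K]
              if w in index:                       # skips windows with ambiguous bases
                  counts[index[w]] += 1
          unit = counts / np.linalg.norm(counts)   # relative composition direction
          return len(seq) * unit                   # |x(s)| equals the sequence length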

  18. Macroscopic theory of dark sector

    NASA Astrophysics Data System (ADS)

    Meierovich, Boris

    A simple Lagrangian with the squared covariant divergence of a vector field as a kinetic term turned out to be an adequate tool for macroscopic description of the dark sector. The zero-mass field acts as the dark energy. Its energy-momentum tensor is a simple additive to the cosmological constant [1]. Space-like and time-like massive vector fields describe two different forms of dark matter. The space-like massive vector field is attractive. It is responsible for the observed plateau in galaxy rotation curves [2]. The time-like massive field displays repulsive elasticity. In balance with dark energy and ordinary matter it provides a four-parametric diversity of regular solutions of the Einstein equations describing different possible cosmological and oscillating non-singular scenarios of evolution of the universe [3]. In particular, the singular big bang turns into a regular inflation-like transition from contraction to expansion with accelerated expansion at late times. The fine-tuned Friedman-Robertson-Walker singular solution corresponds to the particular limiting case at the boundary of existence of regular oscillating solutions in the absence of vector fields. The simplicity of the general covariant expression for the energy-momentum tensor allows one to analyse the main properties of the dark sector analytically and avoid unnecessary model assumptions. It opens a possibility to trace how the additional attraction of the space-like dark matter, dominating on the galaxy scale, transforms into the elastic repulsion of the time-like dark matter, dominating on the scale of the Universe. 1. B. E. Meierovich. "Vector fields in multidimensional cosmology". Phys. Rev. D 84, 064037 (2011). 2. B. E. Meierovich. "Galaxy rotation curves driven by massive vector fields: Key to the theory of the dark sector". Phys. Rev. D 87, 103510 (2013). 3. B. E. Meierovich. "Towards the theory of the evolution of the Universe". Phys. Rev. D 85, 123544 (2012).

  19. GLOBE Observer Mosquito Habitat Mapper: Geoscience and Public Health Connections

    NASA Astrophysics Data System (ADS)

    Low, R.; Boger, R. A.

    2017-12-01

    The global health crisis posed by vector-borne diseases is so great in scope that it is clearly insurmountable without the active help of tens or hundreds of thousands of individuals, working to identify and eradicate risk in communities around the world. Mobile devices equipped with data collection capabilities and visualization opportunities are lowering the barrier for participation in data collection efforts. The GLOBE Observer Mosquito Habitat Mapper (MHM) provides citizen scientists with an easy-to-use mobile platform to identify and locate mosquito breeding sites in their community. The app also supports the identification of vector taxa in the larval development phase via a built-in key, which provides important information for scientists and public health officials tracking the rate of range expansion of invasive vector species and associated health threats. GO Mosquito is actively working with other citizen scientist programs across the world to ensure interoperability of data through standardization of metadata fields specific to vector monitoring, and through the development of APIs that allow for data exchange and shared data display through a UN-sponsored proof of concept project, Global Mosquito Alert. Avenues of application for mosquito vector data, both directly by public health entities and by modelers who employ remotely sensed environmental data to project mosquito population dynamics and epidemic disease, will be featured.

  20. Cost of standard indoor ultra-low-volume space spraying as a method to control adult dengue vectors.

    PubMed

    Ditsuwan, Thanittha; Liabsuetrakul, Tippawan; Ditsuwan, Vallop; Thammapalo, Suwich

    2012-06-01

    To assess the costs of standard indoor ultra-low-volume (SID-ULV) space spraying for controlling dengue vectors in Thailand. Resources related to SID-ULV space spraying as a method to control dengue vectors between July and December 2009 were identified, measured and valued taking a societal perspective into consideration. Information on costs was collected from direct observations, interviews and bookkeeping records. Uncertainty of unit costs was investigated using a bootstrap technique. Costs of SID-ULV were calculated from 18 new dengue cases that covered 1492 surrounding houses. The average coverage of the SID-ULV was 64.4%. In the first round of spraying, 53% of target houses were sprayed and 44.6% in the second round, of which 69.2% and 54.7% received entire indoor space spraying. Unit costs per case, per 10 houses and per 100 m² were USD 705 (95% Confidence Interval (CI), 539-888), USD 180 (95% CI, 150-212) and USD 23 (95% CI, 17-30). The majority of the SID-ULV unit cost per case was attributed to productivity loss (83.9%) and recurrent costs (15.2%). The unit costs of the SID-ULV per case and per house in rural areas were 2.8 and 1.6 times lower than in the municipal area. The estimated annual cost of SID-ULV space spraying from 2005 to 2009 using a healthcare perspective ranged from USD 5.3 to 10.3 million. The majority of the cost of SID-ULV space spraying was attributed to productivity loss. Potential productivity loss influences the achievement of high coverage, so well-planned SID-ULV space spraying strategies are needed to reduce costs. © 2012 Blackwell Publishing Ltd.

  1. Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications.

    PubMed

    Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P

    2016-04-13

    An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in a notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).

  2. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    PubMed

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these dimensional spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for arousal and valence space discrimination are the EEG, Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
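
    A simplified version of the feature-elimination loop is sketched below, assuming scikit-learn's LinearSVR fitted separately per output (arousal, valence) and features ranked by summed absolute weights; the genuinely multiple-output SVR and the RFE criterion of the paper are only approximated here.

      # Simplified RFE loop for a two-output (arousal/valence) regression problem.
      # The paper uses a genuinely multiple-output SVR; here each output gets its own
      # linear SVR and features are ranked by the summed absolute weights.
      import numpy as np
      from sklearn.svm import LinearSVR

      def rfe_two_outputs(X, Y, n_keep):
          keep = np.arange(X.shape[1])
          while keep.size > n_keep:
              weights = np.zeros(keep.size)
              for k in range(Y.shape[1]):              # one model per emotion dimension
                  model = LinearSVR(C=1.0, max_iter=5000).fit(X[:, keep], Y[:, k])
                  weights += np.abs(model.coef_)
              keep = keep[np.argsort(weights)[1:]]     # drop the weakest feature
          return keep                                  # indices of retained features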

  3. Using Grid Cells for Navigation

    PubMed Central

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-01-01

    Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
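
    As a minimal sketch of the Fourier-shift-theorem idea the abstract alludes to (not the authors' actual algorithm): a spatial shift of a periodic firing pattern appears as a phase offset in each grid module, and combining offsets across modules with different periods can encode a displacement much longer than any single period.

```latex
% Shift theorem: translating a pattern by d multiplies each Fourier component by a phase.
\[
  f(x - d) \;\longleftrightarrow\; e^{-ikd}\,\hat{f}(k),
  \qquad
  \Delta\phi_i \;=\; \frac{2\pi d}{\lambda_i} \pmod{2\pi},
\]
% so the phase offset of grid module i (period \lambda_i) gives d modulo \lambda_i,
% and the set of offsets across modules determines the goal vector d.
```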

  4. Application of vector analysis on study of illuminated area and Doppler characteristics of airborne pulse radar

    NASA Astrophysics Data System (ADS)

    Wang, Haijiang; Yang, Ling

    2014-12-01

    In this paper, the application of vector analysis to the study of the illuminated area and the Doppler frequency distribution of an airborne pulse radar is presented. An important feature of vector analysis is that it closely combines geometric ideas with algebraic calculations. Through coordinate transforms, the relationship between the radar antenna frame and the ground frame under the aircraft's motion attitude is derived. From the time-space analysis, the overlap between the radar beam footprint and the pulse-illuminated zone is obtained. Furthermore, the Doppler frequency expression is deduced and the Doppler frequency distribution is plotted. Using the time-space analysis results, some important parameters of a specified airborne radar system are obtained. The results are also applied to correct the phase error caused by attitude changes in airborne synthetic aperture radar (SAR) imaging.
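
    The record gives the Doppler result only in words. As a small numeric illustration (assuming the standard two-way monostatic relation f_D = 2 v_r / λ; all positions, velocities and the wavelength are hypothetical), the Doppler shift of an illuminated ground point follows from projecting the platform velocity vector onto the line of sight:

```python
import numpy as np

def doppler_shift(velocity, radar_pos, ground_pos, wavelength):
    """Two-way Doppler shift of a ground point seen from a moving monostatic radar."""
    los = ground_pos - radar_pos                 # line-of-sight vector to the illuminated point
    u = los / np.linalg.norm(los)                # unit line-of-sight vector
    radial_speed = np.dot(velocity, u)           # projection of platform velocity on the LOS
    return 2.0 * radial_speed / wavelength       # f_D = 2 * v_r / lambda

v = np.array([200.0, 0.0, 0.0])                  # aircraft velocity, m/s (hypothetical)
p_radar = np.array([0.0, 0.0, 3000.0])           # platform position, m
p_ground = np.array([5000.0, 2000.0, 0.0])       # illuminated ground point, m
print(f"Doppler shift: {doppler_shift(v, p_radar, p_ground, wavelength=0.03):.1f} Hz")
```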

  5. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    NASA Astrophysics Data System (ADS)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonality, nonlinearity, randomness and fuzziness, existing support vector kernels do not approximate the random curve of the sales time series well in the quadratic continuous integral (L2) space. In this paper, we present a hybrid intelligent system combining a wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of its application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible. A comparison with other methods is also given, showing that, for the discussed example, the proposed method outperforms the hybrid PSOv-SVM and other traditional methods.
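
    The record does not spell out the wavelet kernel, so the sketch below uses a common Morlet-type translation-invariant wavelet kernel from the wavelet-SVM literature as a stand-in, plugged into scikit-learn's SVR as a callable kernel; the demand series is synthetic and the PSO hyperparameter search is omitted.

```python
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=2.0):
    """Wavelet kernel built from the Morlet-type mother wavelet h(u) = cos(1.75u) * exp(-u^2/2)."""
    u = (X[:, None, :] - Y[None, :, :]) / a                    # pairwise coordinate differences
    return np.prod(np.cos(1.75 * u) * np.exp(-0.5 * u ** 2), axis=2)

# Synthetic seasonal demand series turned into a lagged regression problem (illustrative only).
rng = np.random.default_rng(2)
t = np.arange(60, dtype=float)
demand = 100 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)
lags = 3
X = np.array([demand[i - lags:i] for i in range(lags, demand.size)])
y = demand[lags:]

model = SVR(kernel=lambda A, B: wavelet_kernel(A, B, a=2.0), C=10.0)
model.fit(X, y)
print(model.predict(X[-1:]))                                   # one-step-ahead fitted forecast
```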

  6. Cosmology and accelerator tests of strongly interacting dark matter

    DOE PAGES

    Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...

    2018-03-23

    A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. As a result, we also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.

  7. Precomputed state dependent digital control of a nuclear rocket engine

    NASA Technical Reports Server (NTRS)

    Johnson, M. R.

    1972-01-01

    A control method applicable to multiple-input multiple-output nonlinear time-invariant systems in which desired behavior can be expressed explicitly as a trajectory in system state space is developed. The precomputed state dependent control method is basically a synthesis technique in which a suboptimal control law is developed off-line, prior to system operation. This law is obtained by conducting searches at a finite number of points in state space, in the vicinity of some desired trajectory, to obtain a set of constant control vectors which tend to return the system to the desired trajectory. These vectors are used to evaluate the unknown coefficients in a control law having an assumed hyperellipsoidal form. The resulting coefficients constitute the heart of the controller and are used in the on-line computation of control vectors. Two examples of PSDC are given prior to the more detailed description of the NERVA control system development.

  8. Cosmology and accelerator tests of strongly interacting dark matter

    NASA Astrophysics Data System (ADS)

    Berlin, Asher; Blinov, Nikita; Gori, Stefania; Schuster, Philip; Toro, Natalia

    2018-03-01

    A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.

  9. Cosmology and accelerator tests of strongly interacting dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Asher; Blinov, Nikita; Gori, Stefania

    A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. As a result, we also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.

  10. Ecological Niche Modelling Predicts Southward Expansion of Lutzomyia (Nyssomyia) flaviscutellata (Diptera: Psychodidae: Phlebotominae), Vector of Leishmania (Leishmania) amazonensis in South America, under Climate Change

    PubMed Central

    Carvalho, Bruno M.; Ready, Paul D.

    2015-01-01

    Vector borne diseases are susceptible to climate change because distributions and densities of many vectors are climate driven. The Amazon region is endemic for cutaneous leishmaniasis and is predicted to be severely impacted by climate change. Recent records suggest that the distributions of Lutzomyia (Nyssomyia) flaviscutellata and the parasite it transmits, Leishmania (Leishmania) amazonensis, are expanding southward, possibly due to climate change, and sometimes associated with new human infection cases. We define the vector’s climatic niche and explore future projections under climate change scenarios. Vector occurrence records were compiled from the literature, museum collections and Brazilian Health Departments. Six bioclimatic variables were used as predictors in six ecological niche model algorithms (BIOCLIM, DOMAIN, MaxEnt, GARP, logistic regression and Random Forest). Projections for 2050 used 17 general circulation models in two greenhouse gas representative concentration pathways: “stabilization” and “high increase”. Ensemble models and consensus maps were produced by overlapping binary predictions. Final model outputs showed good performance and significance. The use of species absence data substantially improved model performance. Currently, L. flaviscutellata is widely distributed in the Amazon region, with records in the Atlantic Forest and savannah regions of Central Brazil. Future projections indicate expansion of the climatically suitable area for the vector in both scenarios, towards higher latitudes and elevations. L. flaviscutellata is likely to find increasingly suitable conditions for its expansion into areas where human population size and density are much larger than they are in its current locations. If environmental conditions change as predicted, the range of the vector is likely to expand to southeastern and central-southern Brazil, eastern Paraguay and further into the Amazonian areas of Bolivia, Peru, Ecuador, Colombia and Venezuela. These areas will only become endemic for L. amazonensis, however, if they have competent reservoir hosts and transmission dynamics matching those in the Amazon region. PMID:26619186

  11. Vector and Raster Data Storage Based on Morton Code

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats is still a tricky problem in geographic information system environments, and there is still no generally accepted way to solve it. This article proposes a method for handling vector and raster data in a unified way. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes. This method stores vector and raster data in the Oracle database and uses Morton codes, instead of row/column indices and X, Y coordinates, to mark the position information of both data types. Using Morton codes to index geographic information makes fuller use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes their joint visualization more intuitive. The method is very helpful in situations that require analysing or displaying vector and raster data at the same time.
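
    As a minimal, self-contained sketch of the 2-D Morton (Z-order) encoding the abstract relies on, independent of the Oracle/ADO/Visual C++ implementation: the bits of the row and column (or quantised X, Y) indices are interleaved into a single key, so raster cells and vector vertices share one ordering.

```python
def morton_encode(row, col, bits=16):
    """Interleave the bits of (row, col) into a single Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((row >> i) & 1) << (2 * i + 1)   # row bits go to the odd bit positions
        code |= ((col >> i) & 1) << (2 * i)       # column bits go to the even bit positions
    return code

def morton_decode(code, bits=16):
    """Recover (row, col) from a Morton code."""
    row = col = 0
    for i in range(bits):
        row |= ((code >> (2 * i + 1)) & 1) << i
        col |= ((code >> (2 * i)) & 1) << i
    return row, col

print(morton_encode(5, 9))                        # -> 99
print(morton_decode(morton_encode(5, 9)))         # -> (5, 9)
```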

  12. Optimizing interplanetary trajectories with deep space maneuvers. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Navagh, John

    1993-01-01

    Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.

  13. Optimizing interplanetary trajectories with deep space maneuvers

    NASA Astrophysics Data System (ADS)

    Navagh, John

    1993-09-01

    Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.

  14. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

    Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
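
    The report's exact construction is not reproduced in this record, so the following is only a generic sketch of the underlying idea: orthonormalise the active constraint normals by Gram-Schmidt and subtract their components from the gradient, leaving a search direction that stays on the constraint surface. The constraint matrix and gradient values are hypothetical.

```python
import numpy as np

def gram_schmidt(rows, tol=1e-12):
    """Orthonormalise the constraint normals (rows of A) by classical Gram-Schmidt."""
    basis = []
    for r in rows:
        v = np.array(r, dtype=float)
        for q in basis:
            v -= (q @ v) * q
        n = np.linalg.norm(v)
        if n > tol:                        # skip linearly dependent constraints
            basis.append(v / n)
    return basis

def project_gradient(grad, A):
    """Remove from the gradient its components along the active constraint normals."""
    g = np.array(grad, dtype=float)
    for q in gram_schmidt(A):
        g -= (q @ g) * q
    return g

A = np.array([[1.0, 1.0, 0.0]])            # one active constraint, x1 + x2 = const (hypothetical)
g = np.array([3.0, 1.0, 2.0])              # objective gradient at the current point
print(project_gradient(g, A))              # -> [ 1. -1.  2.], orthogonal to the constraint normal
```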

  15. INTERIM ANALYSIS OF THE CONTRIBUTION OF HIGH-LEVEL EVIDENCE FOR DENGUE VECTOR CONTROL.

    PubMed

    Horstick, Olaf; Ranzinger, Silvia Runge

    2015-01-01

    This interim analysis reviews the available systematic literature for dengue vector control on three levels: 1) single and combined vector control methods, with existing work on peridomestic space spraying and on Bacillus thuringiensis israelensis, and further work soon to be available on the use of temephos, copepods and larvivorous fish; 2) vector control for a specific purpose, such as outbreak control; and 3) the strategic level, for example decentralization versus centralization, with a systematic review on vector control organization. Clear best-practice guidelines for the methodology of entomological studies are needed, and dengue transmission data need to be measured and included. The following recommendations emerge: although vector control can be effective, implementation remains an issue; single interventions are probably not useful; combinations of interventions have mixed results; careful implementation of vector control measures may be most important; outbreak interventions are often applied with questionable effectiveness.

  16. Axial vector Z′ and anomaly cancellation

    NASA Astrophysics Data System (ADS)

    Ismail, Ahmed; Keung, Wai-Yee; Tsao, Kuo-Hsing; Unwin, James

    2017-05-01

    Whilst the prospect of new Z′ gauge bosons with only axial couplings to the Standard Model (SM) fermions is widely discussed, examples of anomaly-free renormalisable models are lacking in the literature. We look to remedy this by constructing several motivated examples. Specifically, we consider axial vectors which couple universally to all SM fermions, as well as those which are generation-specific, leptophilic, and leptophobic. Anomaly cancellation typically requires the presence of new coloured and charged chiral fermions, and we argue that in a large class of models the masses of these new states are expected to be comparable to that of the axial vector. Finally, an axial vector mediator could provide a portal between SM and hidden sector states, and we also consider the possibility that the axial vector couples to dark matter. If the dark matter relic density is set by freeze-out via the axial vector, this strongly constrains the parameter space.

  17. Hamiltonian indices and rational spectral densities

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.; Duncan, T. E.

    1980-01-01

    Several (global) topological properties of various spaces of linear systems, particularly symmetric, lossless, and Hamiltonian systems, and multivariable spectral densities of fixed McMillan degree are announced. The study is motivated by a result asserting that on a connected but not simply connected manifold, it is not possible to find a vector field having a sink as its only critical point. In the scalar case, this is illustrated by showing that only on the space of McMillan degree = |Cauchy index| = n scalar transfer functions can one define a globally convergent vector field. This result holds both in discrete-time and for the nonautonomous case. With these motivations in mind, theorems of Bochner and Fogarty are used in showing that spaces of transfer functions defined by symmetry conditions are, in fact, smooth algebraic manifolds.

  18. Principal Components Analysis Studies of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    2001-11-01

    We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal 'traits' that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same and depends upon what other components are in the scene, and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a single mineralogic spectral endmember. It is more likely that many spectral endmembers vary identically to within the noise level, so that the PCA technique cannot distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example 'amount of 2 μm absorption'. We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to USGS, JHU, and JPL spectral libraries as acquired through the JPL ASTER project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work was supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data were gathered at the NASA Infrared Telescope Facility in collaboration with the telescope operators and with thanks to the support staff and day crew.

  19. Single Vector Calibration System for Multi-Axis Load Cells and Method for Calibrating a Multi-Axis Load Cell

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor)

    2003-01-01

    A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.

  20. Process for structural geologic analysis of topography and point data

    DOEpatents

    Eliason, Jay R.; Eliason, Valerie L. C.

    1987-01-01

    A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data, such as fracture phenomena that can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
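
    The patent's coplanar analysis is not detailed in this record; as a minimal sketch of one standard way to test whether a pair of vector segments defines a common plane, the snippet below uses the scalar triple product (the two example segments, the tolerance and the returned plane normal are purely illustrative).

```python
import numpy as np

def coplanar_plane(p1, d1, p2, d2, tol=1e-6):
    """If two segments (point p, direction d) lie in a common plane, return its unit normal."""
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < tol:                  # parallel directions: no unique plane
        return None
    if abs(np.dot(n, p2 - p1)) > tol:            # scalar triple product != 0 -> not coplanar
        return None
    return n / np.linalg.norm(n)

p1, d1 = np.array([0., 0., 0.]), np.array([1., 0., 0.])   # hypothetical valley segment 1
p2, d2 = np.array([2., 1., 0.]), np.array([0., 1., 0.])   # hypothetical valley segment 2
print(coplanar_plane(p1, d1, p2, d2))                      # -> [0. 0. 1.], a shared structural plane
```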

  1. An improved approach to infer protein-protein interaction based on a hierarchical vector space model.

    PubMed

    Zhang, Jiongmin; Jia, Ke; Jia, Jinmeng; Qian, Ying

    2018-04-27

    Comparing and classifying functions of gene products are important in today's biomedical research. The semantic similarity derived from the Gene Ontology (GO) annotation has been regarded as one of the most widely used indicators for protein interaction. Among the various approaches proposed, those based on the vector space model are relatively simple, but their effectiveness is far from satisfactory. We propose a Hierarchical Vector Space Model (HVSM) for computing semantic similarity between different genes or their products, which enhances the basic vector space model by introducing the relations between GO terms. Besides the directly annotated terms, HVSM also takes their ancestors and descendants related by "is_a" and "part_of" relations into account. Moreover, HVSM introduces the concept of a Certainty Factor to calibrate the semantic similarity based on the number of terms annotated to genes. To assess the performance of our method, we applied HVSM to Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets. Compared with TCSS, Resnik, and other classic similarity measures, HVSM achieved significant improvement in distinguishing positive from negative protein interactions. We also tested its correlation with sequence, EC, and Pfam similarity using the online tool CESSM. HVSM showed an improvement of up to 4% compared to TCSS, 8% compared to IntelliGO, 12% compared to basic VSM, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC using AUC scores. The CESSM test showed that HVSM was comparable to SimGIC and superior to TCSS and all other similarity measures in CESSM. Supplementary information and the software are available at https://github.com/kejia1215/HVSM.

  2. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided within several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and the location of the jettisoned object was then calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
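
    The record does not give the triangulation formulas, so the snippet below is only a generic sketch of the two-camera step in close-range photogrammetry: each synchronized frame yields one viewing ray per camera, and the object position is taken as the least-squares intersection (midpoint of closest approach) of the two rays; camera positions and ray directions are hypothetical. The velocity vector would then follow from differencing successive positions over the frame interval.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment joining ray c1 + t*d1 and ray c2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = c1 - c2
    denom = a * c - b * b                          # zero only for parallel rays
    t = (b * (d2 @ w) - c * (d1 @ w)) / denom
    s = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

# Hypothetical camera positions and viewing rays toward a jettisoned object (ISS frame, metres).
c1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
c2, d2 = np.array([10.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.0])
print(triangulate(c1, d1, c2, d2))                 # -> approximately [5. 5. 0.]
```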

  3. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our method builds on Burges' approach, which constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend it by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.

  4. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    DOEpatents

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.

  5. Optical/Infrared Signatures for Space-Based Remote Sensing

    DTIC Science & Technology

    2007-11-01

    [Only fragments of this report's text are recoverable from the record: it cites Vanderbilt et al. (1985a, 1985b) on the introduction of linear polarization measurements and progress toward a full vector theory of polarization, describes radiance profiles taken 30 s apart in a view direction orthogonal to the velocity vector that show structure due to radiance layers, and refers to a figure of the northern polar region and MSX locations.]

  6. Assessing Construct Validity Using Multidimensional Item Response Theory.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The concept of a user-specified validity sector is discussed. The idea of the validity sector combines the work of M. D. Reckase (1986) and R. Shealy and W. Stout (1991). Reckase developed a methodology to represent an item in a multidimensional latent space as a vector. Item vectors are computed using multidimensional item response theory item…

  7. On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.

    ERIC Educational Resources Information Center

    Carter, Randy L.; And Others

    1989-01-01

    The partitioning of squared Euclidean (E²) distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
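
    For reference, the identity behind this partitioning can be stated as follows (an illustrative formulation, not the article's exact notation): if mutually orthogonal projectors decompose the space, the squared distance splits into additive subspace contributions.

```latex
% Orthogonal projectors P_1, ..., P_K with P_1 + ... + P_K = I on R^M.
\[
  \lVert x - y \rVert^2 \;=\; \sum_{k=1}^{K} \lVert P_k (x - y) \rVert^2 ,
\]
% since the cross terms vanish: (P_j u)^T (P_k u) = 0 for j != k.
```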

  8. Computation of Surface Integrals of Curl Vector Fields

    ERIC Educational Resources Information Center

    Hu, Chenglie

    2007-01-01

    This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
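
    For reference, the theorem the technique rests on (stated generically, not in the article's notation): the flux of a curl field through a surface equals the circulation of the field around the surface's boundary, so the surface integral can be traded for a line integral.

```latex
\[
  \iint_{S} (\nabla \times \mathbf{F}) \cdot d\mathbf{S}
  \;=\; \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}.
\]
```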

  9. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center-of-gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  10. On the dynamical and geometrical symmetries of Keplerian motion

    NASA Astrophysics Data System (ADS)

    Wulfman, Carl E.

    2009-05-01

    The dynamical symmetries of classical, relativistic and quantum-mechanical Kepler systems are considered to arise from geometric symmetries in PQET phase space. To establish their interconnection, the symmetries are related with the aid of a Lie-algebraic extension of Dirac's correspondence principle, a canonical transformation containing a Cunningham-Bateman inversion, and a classical limit involving a preliminary canonical transformation in ET space. The Lie-algebraic extension establishes the conditions under which the uncertainty principle allows the local dynamical symmetry of a quantum-mechanical system to be the same as the geometrical phase-space symmetry of its classical counterpart. The canonical transformation converts Poincaré-invariant free-particle systems into ISO(3,1) invariant relativistic systems whose classical limit produces Keplerian systems. Locally Cartesian relativistic PQET coordinates are converted into a set of eight conjugate position and momentum coordinates whose classical limit contains Fock projective momentum coordinates and the components of Runge-Lenz vectors. The coordinate systems developed via the transformations are those in which the evolution and degeneracy groups of the classical system are generated by Poisson-bracket operators that produce ordinary rotation, translation and hyperbolic motions in phase space. The way in which these define classical Keplerian symmetries and symmetry coordinates is detailed. It is shown that for each value of the energy of a Keplerian system, the Poisson-bracket operators determine two invariant functions of positions and momenta, which together with its regularized Hamiltonian, define the manifold in six-dimensional phase space upon which motions evolve.

  11. Permanent Monitoring of the Reference Point of the 20m Radio Telescope Wettzell

    NASA Technical Reports Server (NTRS)

    Neidhardt, Alexander; Losler, Michael; Eschelbach, Cornelia; Schenk, Andreas

    2010-01-01

    To achieve the goals of the VLBI2010 project and the Global Geodetic Observing System (GGOS), an automated monitoring of the reference points of the various geodetic space techniques, including Very Long Baseline Interferometry (VLBI), is desirable. The resulting permanent monitoring of the local-tie vectors at co-location stations is essential to obtain the sub-millimeter level in the combinations. For this reason a monitoring system was installed at the Geodetic Observatory Wettzell by the Geodetic Institute of the University of Karlsruhe (GIK) to observe the 20m VLBI radio telescope from May to August 2009. A specially developed software from GIK collected data from automated total station measurements, meteorological sensors, and sensors in the telescope monument (e.g., Invar cable data). A real-time visualization directly offered a live view of the measurements during the regular observation operations. Additional scintillometer measurements allowed refraction corrections during the post-processing. This project is one of the first feasibility studies aimed at determining significant deformations of the VLBI antenna due to, for instance, changes in temperature.

  12. Space Shuttle Projects Overview to Columbia Air Forces War College

    NASA Technical Reports Server (NTRS)

    Singer, Jody; McCool, Alex (Technical Monitor)

    2000-01-01

    This paper presents, in viewgraph form, a general overview of space shuttle projects. Some of the topics include: 1) Space Shuttle Projects; 2) Marshall Space Flight Center Space Shuttle Projects Office; 3) Space Shuttle Propulsion systems; 4) Space Shuttle Program Major Sites; 5) NASA Office of Space flight (OSF) Center Roles in Space Shuttle Program; 6) Space Shuttle Hardware Flow; and 7) Shuttle Flights To Date.

  13. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then refines a codebook initialised with the improved vectors provided by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
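
    The record does not fix the grouping details, so the sketch below is only one plausible reading: group training vectors by their scores on the first principal component, seed the codebook with each group's centroid/median/random vector, and refine with Lloyd-type iterations (k-means stands in for LBG here). Data and parameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def pca_lbg_codebook(train, n_codewords, variant="centroid", seed=0):
    """Seed a VQ codebook from groups formed along the first principal axis, then refine it."""
    rng = np.random.default_rng(seed)
    centred = train - train.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    order = np.argsort(centred @ vt[0])                      # sort vectors by first-PC score
    groups = np.array_split(order, n_codewords)              # equal-size groups along that axis
    init = []
    for g in groups:
        if variant == "centroid":
            init.append(train[g].mean(axis=0))
        elif variant == "median":
            init.append(train[g[len(g) // 2]])               # vector with the median projection
        else:                                                # "random"
            init.append(train[rng.choice(g)])
    km = KMeans(n_clusters=n_codewords, init=np.array(init), n_init=1).fit(train)
    return km.cluster_centers_

blocks = np.random.default_rng(1).normal(size=(500, 16))     # e.g. 4x4 image blocks as vectors
print(pca_lbg_codebook(blocks, n_codewords=8).shape)         # -> (8, 16)
```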

  14. Method and apparatus for in-situ detection and isolation of aircraft engine faults

    NASA Technical Reports Server (NTRS)

    Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)

    2007-01-01

    A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters; extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals; calculating the magnitude of the measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value; extracting an average (mean) direction and a fault level mapping for each of a plurality of fault types, based on the operating regime; calculating the projection of the measurement vector onto the average direction of each of the plurality of fault types; determining the fault type based on which projection is maximum; and mapping the projection to a continuous-valued fault level using a lookup table.
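
    As a minimal sketch of the detection-and-isolation logic this abstract enumerates (the fault directions, noise levels, threshold and residual values are all hypothetical, and the per-regime lookup of a fault level is omitted):

```python
import numpy as np

def isolate_fault(residuals, noise_std, fault_directions, threshold):
    """Scale residuals, declare a fault if the scaled magnitude exceeds the threshold,
    then attribute it to the candidate fault direction with the largest projection."""
    z = residuals / noise_std                      # residuals scaled by per-regime noise std
    if np.linalg.norm(z) < threshold:
        return None, 0.0                           # no fault declared
    projections = fault_directions @ z             # one projection per candidate fault type
    k = int(np.argmax(projections))
    return k, float(projections[k])                # fault type and projection (maps to a level)

dirs = np.array([[1.0, 0.0, 0.0],                  # hypothetical mean direction, fault type 0
                 [0.0, 0.7071, 0.7071]])           # hypothetical mean direction, fault type 1
print(isolate_fault(np.array([0.2, 1.5, 1.4]),
                    np.array([0.1, 0.5, 0.5]), dirs, threshold=2.0))   # -> (1, ~4.1)
```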

  15. Project MAGNET High-level Vector Survey Data Reduction

    NASA Technical Reports Server (NTRS)

    Coleman, Rachel J.

    1992-01-01

    Since 1951, the U.S. Navy, under its Project MAGNET program, has been continuously collecting vector aeromagnetic survey data to support the U.S. Defense Mapping Agency's world magnetic and charting program. During this forty-year period, a variety of survey platforms and instrumentation configurations have been used. The current Project MAGNET survey platform is a Navy Orion RP-3D aircraft which has been specially modified and specially equipped with a redundant suite of navigational positioning, attitude, and magnetic sensors. A review of the survey data collection procedures and calibration and editing techniques applied to the data generated by this suite of instrumentation will be presented. Among the topics covered will be the determination of its parameters from the low-level calibration maneuvers flown over geomagnetic observatories.

  16. Modeling Malaria Transmission in Thailand and Indonesia

    NASA Technical Reports Server (NTRS)

    Kiang, Richard; Adimi, Farida; Nigro, Joseph

    2007-01-01

    Malaria Modeling and Surveillance is a project in the NASA Applied Sciences Public Health Applications Program. The main objectives of this project are: 1) identification of the potential breeding sites for major vector species; 2) implementation of a malaria transmission model to identify the key factors that sustain or intensify malaria transmission; and 3) implementation of a risk algorithm to predict the occurrence of malaria and its transmission intensity. Remote sensing and GIS are the essential elements of this project. The NASA Earth science data sets used in this project include AVHRR Pathfinder, TRMM, MODIS, NSIPP and SIESIP. Textural-contextual classifications are used to identify small larval habitats. Neural network methods are used to model malaria cases as a function of precipitation, temperatures, humidity and vegetation. Hindcasts based on these environmental parameters have shown good agreement with epidemiological records. Examples of spatio-temporal modeling of malaria transmission in Southeast Asia are given. Discrete event simulations were used for modeling the detailed interactions among the vector life cycle, the sporogonic cycle and the human infection cycle, under the explicit influences of selected extrinsic and intrinsic factors. The output of the model includes the individual infection status and the quantities normally observed in field studies, such as mosquito biting rates, sporozoite infection rates, gametocyte prevalence and incidence. Results are in good agreement with mosquito vector and human malaria data acquired by Coleman et al. over 4.5 years in Kong Mong Tha, a remote village in western Thailand. Application of our models is not restricted to Southeast Asia. The model and techniques are equally applicable to other regions of the world, when appropriate epidemiological and vector ecological parameters are used as input.

  17. A best-fit model for concept vectors in biomedical research grants.

    PubMed

    Johnson, Calvin; Lau, William; Bhandari, Archna; Hays, Timothy

    2008-11-06

    The Research, Condition, and Disease Categorization (RCDC) project was created to standardize budget reporting by research topic. Text mining techniques have been implemented to classify NIH grant applications into proper research and disease categories. A best-fit model is shown to achieve classification performance rivaling that of concept vectors produced by human experts.

  18. Summary of lessons learned from USDA-ARS Area-Wide Asian Tiger Mosquito Management Project

    USDA-ARS?s Scientific Manuscript database

    Aedes albopictus, the Asian tiger mosquito, is the principal vector of chikungunya and a critical vector of dengue viruses. This daytime biting pest is now distributed over much of the eastern quadrant of the continental U.S. all the way north to coastal New York, and often causes the majority of se...

  19. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  20. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  1. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  2. Declining Prevalence of Disease Vectors Under Climate Change

    NASA Astrophysics Data System (ADS)

    Escobar, Luis E.; Romero-Alvarez, Daniel; Leon, Renato; Lepe-Lopez, Manuel A.; Craft, Meggan E.; Borbor-Cordova, Mercy J.; Svenning, Jens-Christian

    2016-12-01

    More than half of the world population is at risk of vector-borne diseases including dengue fever, chikungunya, Zika, yellow fever, leishmaniasis, Chagas disease, and malaria, with the highest incidences in tropical regions. In Ecuador, vector-borne diseases are present from the coastal and Amazonian regions to the Andes Mountains; however, a detailed characterization of the distribution of their vectors has never been carried out. We estimate the distribution of 14 vectors of the above vector-borne diseases under present-day and future climates. Our results consistently suggest that climate warming is likely threatening some vector species with extinction, locally or completely. These results suggest that climate change could reduce the burden of specific vector species. Other vector species are likely to shift and constrain their geographic ranges to the highlands in Ecuador, potentially affecting novel areas and populations. These forecasts show the need for development of early prevention strategies for vector species currently absent in areas projected as suitable under future climate conditions. Informed interventions could reduce the risk of human exposure to vector species with distributional shifts in response to current and future climate changes. Based on the mixed effects of future climate on human exposure to disease vectors, we argue that research on vector-borne diseases should be cross-scale and include climatic, demographic, and landscape factors, as well as forces facilitating disease transmission at fine scales.

  3. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  4. Standardized Metadata for Human Pathogen/Vector Genomic Sequences

    PubMed Central

    Dugan, Vivien G.; Emrich, Scott J.; Giraldo-Calderón, Gloria I.; Harb, Omar S.; Newman, Ruchi M.; Pickett, Brett E.; Schriml, Lynn M.; Stockwell, Timothy B.; Stoeckert, Christian J.; Sullivan, Dan E.; Singh, Indresh; Ward, Doyle V.; Yao, Alison; Zheng, Jie; Barrett, Tanya; Birren, Bruce; Brinkac, Lauren; Bruno, Vincent M.; Caler, Elizabet; Chapman, Sinéad; Collins, Frank H.; Cuomo, Christina A.; Di Francesco, Valentina; Durkin, Scott; Eppinger, Mark; Feldgarden, Michael; Fraser, Claire; Fricke, W. Florian; Giovanni, Maria; Henn, Matthew R.; Hine, Erin; Hotopp, Julie Dunning; Karsch-Mizrachi, Ilene; Kissinger, Jessica C.; Lee, Eun Mi; Mathur, Punam; Mongodin, Emmanuel F.; Murphy, Cheryl I.; Myers, Garry; Neafsey, Daniel E.; Nelson, Karen E.; Nierman, William C.; Puzak, Julia; Rasko, David; Roos, David S.; Sadzewicz, Lisa; Silva, Joana C.; Sobral, Bruno; Squires, R. Burke; Stevens, Rick L.; Tallon, Luke; Tettelin, Herve; Wentworth, David; White, Owen; Will, Rebecca; Wortman, Jennifer; Zhang, Yun; Scheuermann, Richard H.

    2014-01-01

    High throughput sequencing has accelerated the determination of genome sequences for thousands of human infectious disease pathogens and dozens of their vectors. The scale and scope of these data are enabling genotype-phenotype association studies to identify genetic determinants of pathogen virulence and drug/insecticide resistance, and phylogenetic studies to track the origin and spread of disease outbreaks. To maximize the utility of genomic sequences for these purposes, it is essential that metadata about the pathogen/vector isolate characteristics be collected and made available in organized, clear, and consistent formats. Here we report the development of the GSCID/BRC Project and Sample Application Standard, developed by representatives of the Genome Sequencing Centers for Infectious Diseases (GSCIDs), the Bioinformatics Resource Centers (BRCs) for Infectious Diseases, and the U.S. National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health (NIH), informed by interactions with numerous collaborating scientists. It includes mapping to terms from other data standards initiatives, including the Genomic Standards Consortium’s minimal information (MIxS) and NCBI’s BioSample/BioProjects checklists and the Ontology for Biomedical Investigations (OBI). The standard includes data fields about characteristics of the organism or environmental source of the specimen, spatial-temporal information about the specimen isolation event, phenotypic characteristics of the pathogen/vector isolated, and project leadership and support. By modeling metadata fields into an ontology-based semantic framework and reusing existing ontologies and minimum information checklists, the application standard can be extended to support additional project-specific data fields and integrated with other data represented with comparable standards. The use of this metadata standard by all ongoing and future GSCID sequencing projects will provide a consistent representation of these data in the BRC resources and other repositories that leverage these data, allowing investigators to identify relevant genomic sequences and perform comparative genomics analyses that are both statistically meaningful and biologically relevant. PMID:24936976

  5. Standardized metadata for human pathogen/vector genomic sequences.

    PubMed

    Dugan, Vivien G; Emrich, Scott J; Giraldo-Calderón, Gloria I; Harb, Omar S; Newman, Ruchi M; Pickett, Brett E; Schriml, Lynn M; Stockwell, Timothy B; Stoeckert, Christian J; Sullivan, Dan E; Singh, Indresh; Ward, Doyle V; Yao, Alison; Zheng, Jie; Barrett, Tanya; Birren, Bruce; Brinkac, Lauren; Bruno, Vincent M; Caler, Elizabet; Chapman, Sinéad; Collins, Frank H; Cuomo, Christina A; Di Francesco, Valentina; Durkin, Scott; Eppinger, Mark; Feldgarden, Michael; Fraser, Claire; Fricke, W Florian; Giovanni, Maria; Henn, Matthew R; Hine, Erin; Hotopp, Julie Dunning; Karsch-Mizrachi, Ilene; Kissinger, Jessica C; Lee, Eun Mi; Mathur, Punam; Mongodin, Emmanuel F; Murphy, Cheryl I; Myers, Garry; Neafsey, Daniel E; Nelson, Karen E; Nierman, William C; Puzak, Julia; Rasko, David; Roos, David S; Sadzewicz, Lisa; Silva, Joana C; Sobral, Bruno; Squires, R Burke; Stevens, Rick L; Tallon, Luke; Tettelin, Herve; Wentworth, David; White, Owen; Will, Rebecca; Wortman, Jennifer; Zhang, Yun; Scheuermann, Richard H

    2014-01-01

    High throughput sequencing has accelerated the determination of genome sequences for thousands of human infectious disease pathogens and dozens of their vectors. The scale and scope of these data are enabling genotype-phenotype association studies to identify genetic determinants of pathogen virulence and drug/insecticide resistance, and phylogenetic studies to track the origin and spread of disease outbreaks. To maximize the utility of genomic sequences for these purposes, it is essential that metadata about the pathogen/vector isolate characteristics be collected and made available in organized, clear, and consistent formats. Here we report the development of the GSCID/BRC Project and Sample Application Standard, developed by representatives of the Genome Sequencing Centers for Infectious Diseases (GSCIDs), the Bioinformatics Resource Centers (BRCs) for Infectious Diseases, and the U.S. National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health (NIH), informed by interactions with numerous collaborating scientists. It includes mapping to terms from other data standards initiatives, including the Genomic Standards Consortium's minimal information (MIxS) and NCBI's BioSample/BioProjects checklists and the Ontology for Biomedical Investigations (OBI). The standard includes data fields about characteristics of the organism or environmental source of the specimen, spatial-temporal information about the specimen isolation event, phenotypic characteristics of the pathogen/vector isolated, and project leadership and support. By modeling metadata fields into an ontology-based semantic framework and reusing existing ontologies and minimum information checklists, the application standard can be extended to support additional project-specific data fields and integrated with other data represented with comparable standards. The use of this metadata standard by all ongoing and future GSCID sequencing projects will provide a consistent representation of these data in the BRC resources and other repositories that leverage these data, allowing investigators to identify relevant genomic sequences and perform comparative genomics analyses that are both statistically meaningful and biologically relevant.

  6. Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    NASA Technical Reports Server (NTRS)

    Wahba, G.

    1982-01-01

    Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.

  7. The effects of vector leptoquark on the ℬb(ℬ = Λ,Σ) →ℬμ+μ- decays

    NASA Astrophysics Data System (ADS)

    Wang, Shuai-Wei; Huang, Jin-Shu

    2016-07-01

    In this paper, we study the baryonic semileptonic ℬb(ℬ = Λ, Σ) → ℬμ+μ- decays in the vector leptoquark model with the U = (3, 3, 2/3) state. Using the parameter space constrained by some well-measured decay modes, such as Bs → μ+μ-, Bs-B̄s mixing and B → K∗μ+μ- decays, we show the effects of the vector leptoquark state on the double lepton polarization asymmetries of the ℬb(ℬ = Λ, Σ) → ℬμ+μ- decays, and find that the double lepton polarization asymmetries, except for PLL, PLN and PNL, are sensitive to the contributions of the vector leptoquark model.

  8. Evidence that implicit assumptions of ‘no evolution’ of disease vectors in changing environments can be violated on a rapid timescale

    PubMed Central

    Egizi, Andrea; Fefferman, Nina H.; Fonseca, Dina M.

    2015-01-01

    Projected impacts of climate change on vector-borne disease dynamics must consider many variables relevant to hosts, vectors and pathogens, including how altered environmental characteristics might affect the spatial distributions of vector species. However, many predictive models for vector distributions consider their habitat requirements to be fixed over relevant time-scales, when they may actually be capable of rapid evolutionary change and even adaptation. We examine the genetic signature of a spatial expansion by an invasive vector into locations with novel temperature conditions compared to its native range as a proxy for how existing vector populations may respond to temporally changing habitat. Specifically, we compare invasions into different climate ranges and characterize the importance of selection from the invaded habitat. We demonstrate that vector species can exhibit evolutionary responses (altered allelic frequencies) to a temperature gradient in as little as 7–10 years even in the presence of high gene flow, and further, that this response varies depending on the strength of selection. We interpret these findings in the context of climate change predictions for vector populations and emphasize the importance of incorporating vector evolution into models of future vector-borne disease dynamics. PMID:25688024

  9. Environmental management: a re-emerging vector control strategy.

    PubMed

    Ault, S K

    1994-01-01

    Vector control may be accomplished by environmental management (EM), which consists of permanent or long-term modification of the environment, temporary or seasonal manipulation of the environment, and modifying or changing our life styles and practices to reduce human contact with infective vectors. The primary focus of this paper is EM in the control of human malaria, filariasis, arboviruses, Chagas' disease, and schistosomiasis. Modern EM developed as a discipline based primarily in ecologic principles and lessons learned from the adverse environmental impacts of rural development projects. Strategies such as the suppression of vector populations through the provision of safe water supplies, proper sanitation, solid waste management facilities, sewerage and excreta disposal systems, water manipulation in dams and irrigation systems, vector diversion by zooprophylaxis, and vector exclusion by improved housing, are discussed with appropriate examples. Vectors of malaria, filariasis, Chagas' disease, and schistosomiasis have been controlled by drainage or filling aquatic breeding sites, improved housing and sanitation, the use of expanded polystyrene beads, zooprophylaxis, or the provision of household water supplies. Community participation has been effective in the suppression of dengue vectors in Mexico and the Dominican Republic. Alone or combined with other vector control methods, EM has been proven to be a successful approach to vector control in a number of places. The future of EM in vector control looks promising.

  10. A meta-classifier for detecting prostate cancer by quantitative integration of in vivo magnetic resonance spectroscopy and magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Tiwari, Pallavi; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Recently, in vivo Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS) have emerged as promising new modalities to aid in prostate cancer (CaP) detection. MRI provides anatomic and structural information of the prostate while MRS provides functional data pertaining to biochemical concentrations of metabolites such as creatine, choline and citrate. We have previously presented a hierarchical clustering scheme for CaP detection on in vivo prostate MRS and have recently developed a computer-aided method for CaP detection on in vivo prostate MRI. In this paper we present a novel scheme to develop a meta-classifier to detect CaP in vivo via quantitative integration of multimodal prostate MRS and MRI by use of non-linear dimensionality reduction (NLDR) methods including spectral clustering and locally linear embedding (LLE). Quantitative integration of multimodal image data (MRI and PET) involves the concatenation of image intensities following image registration. However multimodal data integration is non-trivial when the individual modalities include spectral and image intensity data. We propose a data combination solution wherein we project the feature spaces (image intensities and spectral data) associated with each of the modalities into a lower dimensional embedding space via NLDR. NLDR methods preserve the relationships between the objects in the original high dimensional space when projecting them into the reduced low dimensional space. Since the original spectral and image intensity data are divorced from their original physical meaning in the reduced dimensional space, data at the same spatial location can be integrated by concatenating the respective embedding vectors. Unsupervised consensus clustering is then used to partition objects into different classes in the combined MRS and MRI embedding space. Quantitative results of our multimodal computer-aided diagnosis scheme on 16 sets of patient data obtained from the ACRIN trial, for which corresponding histological ground truth for spatial extent of CaP is known, show a marginally higher sensitivity, specificity, and positive predictive value compared to corresponding CAD results with the individual modalities.
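
    The integration step described above can be illustrated with a minimal sketch. The feature matrices, their dimensions, and the use of k-means in place of consensus clustering are assumptions made for illustration; only the overall pattern (embed each modality with NLDR, concatenate the embedding vectors, then cluster the joint space) follows the description in the abstract.

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_voxels = 500
      mri_features = rng.normal(size=(n_voxels, 12))    # hypothetical image-intensity features
      mrs_features = rng.normal(size=(n_voxels, 128))   # hypothetical spectral features

      def embed(X):
          # Project one modality into a low-dimensional embedding space (NLDR step).
          return LocallyLinearEmbedding(n_components=3, n_neighbors=10).fit_transform(X)

      # In the reduced space the modalities are fused by concatenating embedding vectors.
      joint = np.hstack([embed(mri_features), embed(mrs_features)])

      # Unsupervised partitioning of the joint embedding (k-means as a stand-in
      # for the consensus clustering used in the paper).
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(joint)
      print(labels[:20])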

  11. Reviving the shear-free perfect fluid conjecture in general relativity

    NASA Astrophysics Data System (ADS)

    Sikhonde, Muzikayise E.; Dunsby, Peter K. S.

    2017-12-01

    Employing a Mathematica symbolic computer algebra package called xTensor, we present (1+3) -covariant special case proofs of the shear-free perfect fluid conjecture in general relativity. We first present the case where the pressure is constant, and where the acceleration is parallel to the vorticity vector. These cases were first presented in their covariant form by Senovilla et al. We then provide a covariant proof for the case where the acceleration and vorticity vectors are orthogonal, which leads to the existence of a Killing vector along the vorticity. This Killing vector satisfies the new constraint equations resulting from the vanishing of the shear. Furthermore, it is shown that in order for the conjecture to be true, this Killing vector must have a vanishing spatially projected directional covariant derivative along the velocity vector field. This in turn implies the existence of another basic vector field along the direction of the vorticity for the conjecture to hold. Finally, we show that in general, there exists a basic vector field parallel to the acceleration for which the conjecture is true.

  12. Bluetongue Disease Risk Assessment Based on Observed and Projected Culicoides obsoletus spp. Vector Densities

    PubMed Central

    Brugger, Katharina; Rubel, Franz

    2013-01-01

    Bluetongue is an arboviral disease of ruminants causing significant economic losses. Our risk assessment is based on the epidemiological key parameter, the basic reproduction number. It is defined as the number of secondary cases caused by one primary case in a fully susceptible host population, in which values greater than one indicate the possibility, i.e., the risk, for a major disease outbreak. In the course of the Bluetongue virus serotype 8 (BTV-8) outbreak in Europe in 2006 we developed such a risk assessment for the University of Veterinary Medicine Vienna, Austria. Basic reproduction numbers were calculated using a well-known formula for vector-borne diseases considering the population densities of hosts (cattle and small ruminants) and vectors (biting midges of the Culicoides obsoletus spp.) as well as temperature dependent rates. The latter comprise the biting and mortality rate of midges as well as the reciprocal of the extrinsic incubation period. Most important, but generally unknown, is the spatio-temporal distribution of the vector density. Therefore, we established a continuously operating daily monitoring to quantify the seasonal cycle of the vector population by a statistical model. We used cross-correlation maps and Poisson regression to describe vector densities by environmental temperature and precipitation. Our results comprise time series of observed and simulated Culicoides obsoletus spp. counts as well as basic reproduction numbers for the period 2009–2011. For a spatio-temporal risk assessment we projected our results from the location of Vienna to the entire region of Austria. We compiled daily maps of both vector densities and basic reproduction numbers. Basic reproduction numbers above one were generally found between June and August except in the mountainous regions of the Alps. The highest values coincide with the locations of confirmed BTV cases. PMID:23560090
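
    As a rough illustration of the kind of formula referred to above, the sketch below evaluates a Ross-Macdonald-style basic reproduction number from the vector-to-host ratio, biting rate, transmission probabilities, vector mortality and extrinsic incubation period. The functional form and all parameter values are generic placeholders, not the published model or the temperature-dependent rates used in the study.

      import math

      def r0_vector_borne(m, a, b, c, mu, eip, r):
          """Ross-Macdonald-style basic reproduction number (illustrative form only).

          m   : vector-to-host ratio
          a   : vector biting rate (1/day)
          b,c : transmission probabilities vector->host and host->vector
          mu  : vector mortality rate (1/day)
          eip : extrinsic incubation period (days)
          r   : host recovery rate (1/day)
          """
          return (m * a**2 * b * c * math.exp(-mu * eip)) / (mu * r)

      # Hypothetical mid-summer parameter values, for illustration only.
      print(r0_vector_borne(m=50, a=0.25, b=0.9, c=0.05, mu=0.15, eip=10.0, r=0.1))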

  13. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    PubMed

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Delta(vap)H(T(b)) is 1.16, which shows that the present method offers a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.
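
    A group-contribution estimate of this kind can be sketched as a simple sum of group increments. The group parameters and the molecule below are hypothetical placeholders; the published method additionally weights each group by its position in the molecule, which is not reproduced here.

      # Minimal group-contribution sketch for the enthalpy of vaporization at the
      # normal boiling point.  Parameter values are placeholders, not fitted values.
      group_params = {"CH3": 2.1, "CH2": 4.9, "OH": 16.3}   # hypothetical kJ/mol contributions

      def delta_vap_h(groups, a0=9.7):
          """Sum an intercept and the contribution of each group occurrence."""
          return a0 + sum(group_params[g] * n for g, n in groups.items())

      # 1-propanol approximated as CH3 + 2*CH2 + OH (illustrative only).
      print(delta_vap_h({"CH3": 1, "CH2": 2, "OH": 1}), "kJ/mol")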

  14. Solar monochromatic images in magneto-sensitive spectral lines and maps of vector magnetic fields

    NASA Technical Reports Server (NTRS)

    Shihui, Y.; Jiehai, J.; Minhan, J.

    1985-01-01

    A new method is described which uses monochromatic images in magneto-sensitive spectral lines to derive both the magnetic field strength and the angle between the magnetic field lines and the line of sight at various locations in solar active regions. In this way, two-dimensional maps of vector magnetic fields may be constructed. The method was applied to some observational material and reasonable results were obtained. In addition, a project for constructing three-dimensional maps of vector magnetic fields was worked out.

  15. Feasibility study of new energy projects on three-level indicator system

    NASA Astrophysics Data System (ADS)

    Zhan, Zhigang

    2018-06-01

    With the rapid development of the new energy industry, many new energy development projects are being carried out all over the world. To analyze the feasibility of such projects, we build a feasibility assessment model for new energy projects, based on abundant gathered data about progress in new energy projects. Twelve indicators are selected by principal component analysis (PCA). We then construct a three-level indicator system, in which the first level has 1 indicator, the second level has 5 indicators and the third level has 12 indicators. Moreover, we use the entropy weight method (EWM) to obtain the weight vector of the indicators in the third level and multivariate statistical analysis (MVA) to obtain the weight vector of the indicators in the second level. We use this evaluation model to evaluate the feasibility of new energy projects and to provide a reference for subsequent new energy investment. This could contribute to low-carbon, green development worldwide through investment in sustainable new energy projects. We will introduce new variables and improve the weighting model in future work. We also conduct a sensitivity analysis of the model and illustrate its strengths and weaknesses.
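
    The entropy weight method mentioned above can be sketched in a few lines: indicators whose scores vary more across alternatives carry more information and therefore receive larger weights. The score matrix below is hypothetical, and the sketch does not reproduce the paper's full three-level aggregation.

      import numpy as np

      def entropy_weights(X):
          """Entropy weight method over an (alternatives x indicators) score matrix."""
          P = X / X.sum(axis=0)                       # normalise each indicator column
          n = X.shape[0]
          with np.errstate(divide="ignore", invalid="ignore"):
              logP = np.where(P > 0, np.log(P), 0.0)
          e = -(P * logP).sum(axis=0) / np.log(n)     # entropy of each indicator
          d = 1.0 - e                                 # degree of diversification
          return d / d.sum()                          # weights summing to one

      # Hypothetical scores of four candidate projects on three third-level indicators.
      scores = np.array([[0.6, 0.2, 0.9],
                         [0.4, 0.8, 0.7],
                         [0.9, 0.5, 0.3],
                         [0.7, 0.6, 0.5]])
      print(entropy_weights(scores))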

  16. Stennis Space Center Environmental Geographic Information System

    NASA Technical Reports Server (NTRS)

    Lovely, Janette; Cohan, Tyrus

    2000-01-01

    As NASA's lead center for rocket propulsion testing, the John C. Stennis Space Center (SSC) monitors and assesses the off-site impacts of such testing through its Environmental Office (SSC-EO) using acoustical models and ancillary data. The SSC-EO has developed a geographical database, called the SSC Environmental Geographic Information System (SSC-EGIS), that covers an eight-county area bordering the NASA facility. Through the SSC-EGIS, the Environmental Office inventories, assesses, and manages the nearly 139,000 acres that comprise Stennis Space Center and its surrounding acoustical buffer zone. The SSC-EGIS contains in-house data as well as a wide range of data obtained from outside sources, including private agencies and local, county, state, and U.S. government agencies. The database comprises cadastral/geodetic, hydrology, infrastructure, geo-political, physical geography, and socio-economic vector and raster layers. The imagery contained in the database is varied, including low-resolution imagery, such as Landsat TM and SPOT; high-resolution imagery, such as IKONOS and AVIRIS; and aerial photographs. The SSC-EGIS has been an integral part of several major projects and the model upon which similar EGIS's will be developed for other NASA facilities. The Corps of Engineers utilized the SSC-EGIS in a plan to establish wetland mitigation sites within the SSC buffer zone. Mississippi State University employed the SSC-EGIS in a preliminary study to evaluate public access points within the buffer zone. The SSC-EO has also expressly used the SSC-EGIS to assess noise pollution modeling, land management/wetland mitigation assessment, environmental hazards mapping, and protected areas mapping for archaeological sites and for threatened and endangered species habitats. The SSC-EO has several active and planned projects that will also make use of the SSC-EGIS during this and the coming fiscal year.

  17. Attributed graph distance measure for automatic detection of attention deficit hyperactive disordered subjects.

    PubMed

    Dey, Soumyabrata; Rao, A Ravishankar; Shah, Mubarak

    2014-01-01

    Attention Deficit Hyperactive Disorder (ADHD) is getting a lot of attention recently for two reasons. First, it is one of the most commonly found childhood disorders and second, the root cause of the problem is still unknown. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for the analysis of ADHD, which is the focus of our current research. In this paper we propose a novel framework for the automatic classification of the ADHD subjects using their resting state fMRI (rs-fMRI) data of the brain. We construct brain functional connectivity networks for all the subjects. The nodes of the network are constructed with clusters of highly active voxels and edges between any pair of nodes represent the correlations between their average fMRI time series. The activity level of the voxels is measured based on the average power of their corresponding fMRI time-series. For each node of the networks, a local descriptor comprising a set of attributes of the node is computed. Next, the Multi-Dimensional Scaling (MDS) technique is used to project all the subjects from the unknown graph-space to a low dimensional space based on their inter-graph distance measures. Finally, the Support Vector Machine (SVM) classifier is used on the low dimensional projected space for automatic classification of the ADHD subjects. Exhaustive experimental validation of the proposed method is performed using the data set released for the ADHD-200 competition. Our method shows promise as we achieve impressive classification accuracies on the training (70.49%) and test data sets (73.55%). Our results reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
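
    The projection-and-classification step can be sketched as follows. The distance matrix and labels are synthetic stand-ins; only the pipeline (multi-dimensional scaling on a precomputed inter-graph distance matrix, then an SVM in the projected space) follows the description above.

      import numpy as np
      from sklearn.manifold import MDS
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      n_subjects = 40
      # Hypothetical symmetric inter-graph distance matrix between subjects' networks.
      D = rng.random((n_subjects, n_subjects))
      D = (D + D.T) / 2.0
      np.fill_diagonal(D, 0.0)
      labels = rng.integers(0, 2, size=n_subjects)    # synthetic 0 = control, 1 = ADHD

      # Project subjects from graph space into a low-dimensional Euclidean space.
      embedding = MDS(n_components=5, dissimilarity="precomputed",
                      random_state=0).fit_transform(D)

      # Classify subjects in the projected space.
      clf = SVC(kernel="rbf").fit(embedding[:30], labels[:30])
      print("held-out accuracy:", clf.score(embedding[30:], labels[30:]))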

  18. HAL/S programmer's guide. [space shuttle flight software language

    NASA Technical Reports Server (NTRS)

    Newbold, P. M.; Hotz, R. L.

    1974-01-01

    HAL/S is a programming language developed to satisfy the flight software requirements for the space shuttle program. The user's guide explains pertinent language operating procedures and describes the various HAL/S facilities for manipulating integer, scalar, vector, and matrix data types.

  19. Use of a Closed-Loop Tracking Algorithm for Orientation Bias Determination of an S-Band Ground Station

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.; Piasecki, Marie T.; Schrage, Dean S.

    2015-01-01

    The Space Communications and Navigation (SCaN) Testbed project completed installation and checkout testing of a new S-Band ground station at the NASA Glenn Research Center in Cleveland, Ohio in 2015. As with all ground stations, a key alignment process must be conducted to obtain offset angles in azimuth (AZ) and elevation (EL). In telescopes with AZ-EL gimbals, this is normally done with a two-star alignment process, where telescope-based pointing vectors are derived from catalogued locations with the AZ-EL bias angles derived from the pointing vector difference. For an antenna, the process is complicated without an optical asset. For the present study, the solution was to utilize the gimbal control algorithms closed-loop tracking capability to acquire the peak received power signal automatically from two distinct NASA Tracking and Data Relay Satellite (TDRS) spacecraft, without a human making the pointing adjustments. Briefly, the TDRS satellite acts as a simulated optical source and the alignment process proceeds exactly the same way as a one-star alignment. The data reduction process, which will be discussed in the paper, results in two bias angles which are retained for future pointing determination. Finally, the paper compares the test results and provides lessons learned from the activity.

  20. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    PubMed

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge in the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert space. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation-the Lagrangian kernels-and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By such property, our model uses only inputs and outputs as in machine learning and inherits the structured form as in system dynamics, thereby removing the need for the mundane derivations for new systems as well as the generalization problem in learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify the robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.

  1. Fiberprint: A subject fingerprint based on sparse code pooling for white matter fiber analysis.

    PubMed

    Kumar, Kuldeep; Desrosiers, Christian; Siddiqi, Kaleem; Colliot, Olivier; Toews, Matthew

    2017-09-01

    White matter characterization studies use the information provided by diffusion magnetic resonance imaging (dMRI) to draw cross-population inferences. However, the structure, function, and white matter geometry vary across individuals. Here, we propose a subject fingerprint, called Fiberprint, to quantify the individual uniqueness in white matter geometry using fiber trajectories. We learn a sparse coding representation for fiber trajectories by mapping them to a common space defined by a dictionary. A subject fingerprint is then generated by applying a pooling function for each bundle, thus providing a vector of bundle-wise features describing a particular subject's white matter geometry. These features encode unique properties of fiber trajectories, such as their density along prominent bundles. An analysis of data from 861 Human Connectome Project subjects reveals that a fingerprint based on approximately 3000 fiber trajectories can uniquely identify exemplars from the same individual. We also use fingerprints for twin/sibling identification, with observations consistent with twin studies of white matter integrity. Our results demonstrate that the proposed Fiberprint can effectively capture the variability in white matter fiber geometry across individuals, using a compact feature vector (dimension of 50), making this framework particularly attractive for handling large datasets. Copyright © 2017 Elsevier Inc. All rights reserved.
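
    The sparse-code-and-pool idea can be sketched roughly as below. The trajectory descriptors, dictionary size, bundle assignment and mean pooling are all assumptions for illustration; the published pipeline uses its own fiber representation and pooling function.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      rng = np.random.default_rng(2)
      # Hypothetical fiber descriptors: each trajectory resampled to 20 points in 3-D
      # and flattened to a 60-dimensional vector.
      fibers = rng.normal(size=(300, 60))
      bundle_of_fiber = rng.integers(0, 5, size=300)   # which of 5 bundles each fiber belongs to

      # Learn a common dictionary and sparse-code every trajectory against it.
      dico = DictionaryLearning(n_components=50, alpha=1.0, max_iter=20, random_state=0)
      codes = dico.fit_transform(fibers)               # (n_fibers, n_atoms) sparse codes

      # Pool the codes bundle-wise (mean pooling here) and concatenate the pooled
      # vectors into a single subject fingerprint.
      fingerprint = np.concatenate([codes[bundle_of_fiber == b].mean(axis=0) for b in range(5)])
      print(fingerprint.shape)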

  2. BOREAS Regional Soils Data in Raster Format and AEAC Projection

    NASA Technical Reports Server (NTRS)

    Monette, Bryan; Knapp, David; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor)

    2000-01-01

    This data set was gridded by BOREAS Information System (BORIS) Staff from a vector data set received from the Canadian Soil Information System (CanSIS). The original data came in two parts that covered Saskatchewan and Manitoba. The data were gridded and merged into one data set of 84 files covering the BOREAS region. The data were gridded into the AEAC projection. Because the mapping of the two provinces was done separately in the original vector data, there may be discontinuities in some of the soil layers because of different interpretations of certain soil properties. The data are stored in binary, image format files.

  3. A new class of N=2 topological amplitudes

    NASA Astrophysics Data System (ADS)

    Antoniadis, I.; Hohenegger, S.; Narain, K. S.; Sokatchev, E.

    2009-12-01

    We describe a new class of N=2 topological amplitudes that compute a particular class of BPS terms in the low energy effective supergravity action. Specifically, they compute a coupling built from the gauge field strengths F, the gauginos λ and the holomorphic vector multiplet scalars ϕ. The novel feature of these terms is that they depend both on the vector and hypermultiplet moduli. The BPS nature of these terms implies that they satisfy a holomorphicity condition with respect to vector moduli and a harmonicity condition as well as a second order differential equation with respect to hypermultiplet moduli. We study these conditions explicitly in heterotic string theory and show that they are indeed satisfied up to anomalous boundary terms in the world-sheet moduli space. We also analyze the boundary terms in the holomorphicity and harmonicity equations at a generic point in the vector and hyper moduli space. In particular we show that the obstruction to the holomorphicity arises from the one loop threshold correction to the gauge couplings and we argue that this is due to the contribution of non-holomorphic couplings to the connected graphs via elimination of the auxiliary fields.

  4. The Local Stellar Velocity Field via Vector Spherical Harmonics

    NASA Technical Reports Server (NTRS)

    Makarov, V. V.; Murphy, D. W.

    2007-01-01

    We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V_X, V_Y, V_Z) = (10.5, 18.5, 7.3) +/- 0.1 km s^-1, not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V_X, V_Y, V_Z) = (9.9, 15.6, 6.9) +/- 0.2 km s^-1. The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star...

  5. Characterization Of Ocean Wind Vector Retrievals Using ERS-2 High-Resolution Long-Term Dataset And Buoy Measurements

    NASA Astrophysics Data System (ADS)

    Polverari, F.; Talone, M.; Crapolicchio, R.; Levy, G.; Marzano, F.

    2013-12-01

    The European Remote-sensing Satellite (ERS)-2 scatterometer provides wind retrievals over the ocean. To satisfy the need for a high-quality and homogeneous set of scatterometer measurements, the European Space Agency (ESA) has developed the Advanced Scatterometer Processing System (ASPS) project, with which a long-term dataset of new ERS-2 wind products, with an enhanced resolution of 25 km, has been generated by reprocessing the entire ERS mission. This paper presents the main results of the validation of this new dataset using in situ measurements provided by the Prediction and Research Moored Array in the Tropical Atlantic (PIRATA). The comparison indicates that, on average, the scatterometer data agree well with buoy measurements; however, the scatterometer tends to overestimate low winds and underestimate high winds.

  6. The Cauchy-Schwarz Inequality and the Induced Metrics on Real Vector Spaces Mainly on the Real Line

    ERIC Educational Resources Information Center

    Ramasinghe, W.

    2005-01-01

    It is very well known that the Cauchy-Schwarz inequality is an important property shared by all inner product spaces and the inner product induces a norm on the space. A proof of the Cauchy-Schwarz inequality for real inner product spaces exists, which does not employ the homogeneous property of the inner product. However, it is shown that a real…

  7. Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases

    PubMed Central

    Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.

    2007-01-01

    The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
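
    The core property of the LogOdds representation, that vector-space operations on log-odds have probabilistic meaning, can be sketched in a few lines; the two label maps below are hypothetical one-dimensional examples.

      import numpy as np

      def logit(p, eps=1e-6):
          """Map probabilities into the LogOdds vector space."""
          p = np.clip(p, eps, 1.0 - eps)
          return np.log(p / (1.0 - p))

      def expit(t):
          """Map LogOdds values back to probabilities."""
          return 1.0 / (1.0 + np.exp(-t))

      # Two hypothetical probabilistic label maps of the same structure.
      atlas_a = np.array([0.10, 0.40, 0.80, 0.95])
      atlas_b = np.array([0.20, 0.50, 0.70, 0.90])

      # Addition and scalar multiplication are carried out in LogOdds space and then
      # mapped back; averaging there blends the two atlases through their odds.
      blend = expit(0.5 * (logit(atlas_a) + logit(atlas_b)))
      print(blend)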

  8. Use of Bibliometric Analysis to Assess the Scientific Productivity and Impact of the Global Emerging Infections Surveillance and Response System Program, 2006-2012.

    PubMed

    Reaves, Erik J; Valle, Ruben; Chandrasekera, Ruvani M; Soto, Giselle; Burke, Ronald L; Cummings, James F; Bausch, Daniel G; Kasper, Matthew R

    2017-05-01

    Scientific publication in academic literature is a key venue in which the U.S. Department of Defense's Global Emerging Infections Surveillance and Response System (GEIS) program disseminates infectious disease surveillance data. Bibliometric analyses are tools to evaluate scientific productivity and impact of published research, yet are not routinely used for disease surveillance. Our objective was to incorporate bibliometric indicators to measure scientific productivity and impact of GEIS-funded infectious disease surveillance, and assess their utility in the management of the GEIS surveillance program. Metrics on GEIS program scientific publications, project funding, and countries of collaborating institutions from project years 2006 to 2012 were abstracted from annual reports and program databases and organized by the six surveillance priority focus areas: respiratory infections, gastrointestinal infections, febrile and vector-borne infections, antimicrobial resistance, sexually transmitted infections, and capacity building and outbreak response. Scientific productivity was defined as the number of scientific publications in peer-reviewed literature derived from GEIS-funded projects. Impact was defined as the number of citations of a GEIS-funded publication by other peer-reviewed publications, and the Thomson Reuters 2-year journal impact factor. Indicators were retrieved from the Web of Science and Journal Citation Report. To determine the global network of international collaborations between GEIS partners, countries were organized by the locations of collaborating institutions. Between 2006 and 2012, GEIS distributed approximately US $330 million to support 921 total projects. On average, GEIS funded 132 projects (range 96-160) with $47 million (range $43 million-$53 million), annually. The predominant surveillance focus areas were respiratory infections with 317 (34.4%) projects and $225 million, and febrile and vector-borne infections with 274 (29.8%) projects and $45 million. The number of annual respiratory infections-related projects peaked in 2006 and 2009. The number of febrile and vector-borne infections projects increased from 29 projects in 2006 to 58 in 2012. There were 651 articles published in 147 different peer-reviewed journals, with an average Thomson Reuters 2-year journal impact factor of 4.2 (range 0.3-53.5). On average, 93 articles were published per year (range 67-117) with $510,000 per publication. Febrile and vector-borne, respiratory, and gastrointestinal infections had 287, 167, and 73 articles published, respectively. Of the 651 articles published, 585 (89.9%) articles were cited at least once (range 1-1,045). Institutions from 90 countries located in all six World Health Organization regions collaborated with surveillance projects. These findings summarize the GEIS-funded surveillance portfolio between 2006 and 2012, and demonstrate the scientific productivity and impact of the program in each of the six disease surveillance priority focus areas. GEIS might benefit from further financial investment in both the febrile and vector-borne and sexually transmitted infections surveillance priority focus areas and increasing peer-reviewed publications of surveillance data derived from respiratory infections projects. Bibliometric indicators are useful to measure scientific productivity and impact in surveillance systems; and this methodology can be utilized as a management tool to assess future changes to GEIS surveillance priorities. 
Additional metrics should be developed when peer-reviewed literature is not used to disseminate noteworthy accomplishments. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  9. Literature-based concept profiles for gene annotation: the issue of weighting.

    PubMed

    Jelier, Rob; Schuemie, Martijn J; Roes, Peter-Jan; van Mulligen, Erik M; Kors, Jan A

    2008-05-01

    Text-mining has been used to link biomedical concepts, such as genes or biological processes, to each other for annotation purposes or the generation of new hypotheses. To relate two concepts to each other several authors have used the vector space model, as vectors can be compared efficiently and transparently. Using this model, a concept is characterized by a list of associated concepts, together with weights that indicate the strength of the association. The associated concepts in the vectors and their weights are derived from a set of documents linked to the concept of interest. An important issue with this approach is the determination of the weights of the associated concepts. Various schemes have been proposed to determine these weights, but no comparative studies of the different approaches are available. Here we compare several weighting approaches in a large scale classification experiment. Three different techniques were evaluated: (1) weighting based on averaging, an empirical approach; (2) the log likelihood ratio, a test-based measure; (3) the uncertainty coefficient, an information-theory based measure. The weighting schemes were applied in a system that annotates genes with Gene Ontology codes. As the gold standard for our study we used the annotations provided by the Gene Ontology Annotation project. Classification performance was evaluated by means of the receiver operating characteristics (ROC) curve using the area under the curve (AUC) as the measure of performance. All methods performed well with median AUC scores greater than 0.84, and scored considerably higher than a binary approach without any weighting. Especially for the more specific Gene Ontology codes excellent performance was observed. The differences between the methods were small when considering the whole experiment. However, the number of documents that were linked to a concept proved to be an important variable. When larger amounts of texts were available for the generation of the concepts' vectors, the performance of the methods diverged considerably, with the uncertainty coefficient then outperforming the two other methods.
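
    In the vector space model used here, two concepts are compared by the similarity of their weighted association vectors; a minimal sketch with placeholder weights (not values from the study, and independent of which weighting scheme produced them) is given below.

      import numpy as np

      def cosine(u, v):
          """Cosine similarity between two concept profiles."""
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      # Hypothetical profiles of a gene and a GO term over the same five associated
      # concepts; each entry is an association weight from some weighting scheme.
      gene_profile = np.array([0.8, 0.0, 0.3, 0.5, 0.1])
      go_profile   = np.array([0.7, 0.2, 0.0, 0.6, 0.0])

      print(cosine(gene_profile, go_profile))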

  10. Climate Change and Aedes Vectors: 21st Century Projections for Dengue Transmission in Europe.

    PubMed

    Liu-Helmersson, Jing; Quam, Mikkel; Wilder-Smith, Annelies; Stenlund, Hans; Ebi, Kristie; Massad, Eduardo; Rocklöv, Joacim

    2016-05-01

    Warming temperatures may increase the geographic spread of vector-borne diseases into temperate areas. Although dengue is a tropical mosquito-borne viral disease, an outbreak occurred in Madeira, Portugal, in 2012, the first in Europe since the 1920s. This outbreak emphasizes the potential for dengue re-emergence in Europe given changing climates. We present estimates of dengue epidemic potential using vectorial capacity (VC) based on historic and projected temperature (1901-2099). VC indicates the vectors' ability to spread disease among humans. We calculated temperature-dependent VC for Europe, highlighting 10 European cities and three non-European reference cities. Compared with the tropics, Europe shows pronounced seasonality and geographical heterogeneity. Although low, VC during summer is currently sufficient for dengue outbreaks in Southern Europe to commence, if sufficient vector populations (of either Ae. aegypti or Ae. albopictus) were active and virus were introduced. Under various climate change scenarios, the seasonal peak and time window for dengue epidemic potential increase during the 21st century. Our study maps dengue epidemic potential in Europe and identifies seasonal time windows when major cities are most conducive to dengue transmission from 1901 to 2099. Our findings illustrate that, besides vector control, mitigating greenhouse gas emissions crucially reduces the future epidemic potential of dengue in Europe. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
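
    The classic vectorial-capacity expression behind such estimates can be sketched as below; the exact temperature-dependent parameterisation used in the study is not reproduced, and the parameter values are hypothetical warm-season figures.

      import math

      def vectorial_capacity(m, a, b, p, n):
          """Classic vectorial-capacity formula (illustrative form only).

          m : vectors per human
          a : human bites per vector per day
          b : vector competence (transmission probability)
          p : daily vector survival probability
          n : extrinsic incubation period (days)
          """
          return (m * a**2 * b * p**n) / (-math.log(p))

      # Hypothetical warm-season values for an Aedes population.
      print(vectorial_capacity(m=2.0, a=0.3, b=0.5, p=0.9, n=10))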

  11. Einstein-aether theory: dynamics of relativistic particles with spin or polarization in a Gödel-type universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balakin, Alexander B.; Popov, Vladimir A., E-mail: alexander.balakin@kpfu.ru, E-mail: vladipopov@mail.ru

    In the framework of the Einstein-aether theory we consider a cosmological model, which describes the evolution of the unit dynamic vector field with activated rotational degree of freedom. We discuss exact solutions of the Einstein-aether theory, for which the space-time is of the Gödel-type, the velocity four-vector of the aether motion is characterized by a non-vanishing vorticity, thus the rotational vectorial modes can be associated with the source of the universe rotation. The main goal of our paper is to study the motion of test relativistic particles with a vectorial internal degree of freedom (spin or polarization), which is coupled to the unit dynamic vector field. The particles are considered as the test ones in the given space-time background of the Gödel-type; the spin (polarization) coupling to the unit dynamic vector field is modeled using exact solutions of three types. The first exact solution describes the aether with arbitrary Jacobson's coupling constants; the second one relates to the case, when the Jacobson's constant responsible for the vorticity is vanishing; the third exact solution is obtained using three constraints for the coupling constants. The analysis of the exact expressions, which are obtained for the particle momentum and for the spin (polarization) four-vector components, shows that the interaction of the spin (polarization) with the unit vector field induces a rotation, which is additional to the geodesic precession of the spin (polarization) associated with the universe rotation as a whole.

  12. Linear FMCW Laser Radar for Precision Range and Vector Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Pierrottet, Diego; Amzajerdian, Farzin; Petway, Larry; Barnes, Bruce; Lockhard, George; Rubio, Manuel

    2008-01-01

    An all fiber linear frequency modulated continuous wave (FMCW) coherent laser radar system is under development with a goal to aid NASA's new Space Exploration initiative for manned and robotic missions to the Moon and Mars. By employing a combination of optical heterodyne and linear frequency modulation techniques and utilizing state-of-the-art fiber optic technologies, a highly efficient, compact and reliable laser radar suitable for operation in a space environment is being developed. Linear FMCW lidar has the capability of high-resolution range measurements, and when configured into a multi-channel receiver system it has the capability of obtaining high precision horizontal and vertical velocity measurements. Precision range and vector velocity data are beneficial to navigating planetary landing pods to the preselected site and achieving autonomous, safe soft-landing. The all-fiber coherent laser radar has several important advantages over more conventional pulsed laser altimeters or range finders. One of the advantages of the coherent laser radar is its ability to measure directly the platform velocity by extracting the Doppler shift generated from the motion, as opposed to time of flight range finders where terrain features such as hills, cliffs, or slopes add error to the velocity measurement. Doppler measurements are about two orders of magnitude more accurate than the velocity estimates obtained by pulsed laser altimeters. In addition, most of the components of the device are efficient and reliable commercial off-the-shelf fiber optic telecommunication components. This paper discusses the design and performance of a second-generation brassboard system under development at NASA Langley Research Center as part of the Autonomous Landing and Hazard Avoidance (ALHAT) project.
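
    The way a linear FMCW instrument separates range from radial velocity can be sketched with the standard triangular-modulation relations; the chirp rate, wavelength and beat frequencies below are illustrative numbers, not parameters of the flight system.

      # Up-chirp and down-chirp beat frequencies: the Doppler shift cancels in their
      # sum (leaving range) and the range term cancels in their difference.
      C = 3.0e8              # speed of light, m/s
      WAVELENGTH = 1.55e-6   # assumed telecom-band laser wavelength, m
      CHIRP_RATE = 1.0e12    # assumed optical frequency sweep rate, Hz/s

      def range_and_velocity(f_up, f_down):
          """f_up / f_down are beat frequencies (Hz) on the two chirp slopes."""
          rng = C * (f_up + f_down) / (4.0 * CHIRP_RATE)   # target range, m
          doppler = (f_down - f_up) / 2.0                  # Doppler shift, Hz
          vel = doppler * WAVELENGTH / 2.0                 # radial velocity, m/s
          return rng, vel

      print(range_and_velocity(f_up=0.95e6, f_down=1.05e6))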

  13. IMPACT: Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking

    NASA Astrophysics Data System (ADS)

    Koller, J.; Brennan, S.; Godinez, H. C.; Higdon, D. M.; Klimenko, A.; Larsen, B.; Lawrence, E.; Linares, R.; McLaughlin, C. A.; Mehta, P. M.; Palmer, D.; Ridley, A. J.; Shoemaker, M.; Sutton, E.; Thompson, D.; Walker, A.; Wohlberg, B.

    2013-12-01

    Low-Earth orbiting satellites suffer from atmospheric drag due to thermospheric density, which changes by several orders of magnitude, especially during space weather events. Solar flares, precipitating particles and ionospheric currents cause the upper atmosphere to heat up, redistribute, and cool again. These processes are intrinsically included in empirical models, e.g. MSIS and Jacchia-Bowman type models. However, sensitivity analysis has shown that atmospheric drag has the highest influence on satellite conjunction analysis, and empirical models still do not provide the desired accuracy. Space debris and collision avoidance have become an increasingly operational reality. It is paramount to accurately predict satellite orbits and include drag effects driven by space weather. The IMPACT project (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), funded with over $5 Million by the Los Alamos Laboratory Directed Research and Development office, has the goal of developing an integrated system of atmospheric drag modeling, orbit propagation, and conjunction analysis with detailed uncertainty quantification to address the space debris and collision avoidance problem. Now, over two years into the project, we have developed an integrated solution combining physics-based density modeling of the upper atmosphere between 120-700 km altitude, satellite drag forecasting for quiet and disturbed geomagnetic conditions, and conjunction analysis with non-Gaussian uncertainty quantification. We are employing several novel approaches including a unique observational sensor developed at Los Alamos; machine learning with a support-vector machine approach to modeling the coupling between solar drivers of the upper atmosphere and satellite drag; rigorous data assimilative modeling using a physics-based approach instead of empirical modeling of the thermosphere; and a computed-tomography method for extracting temporal maps of thermospheric densities using ground based observations. The developed IMPACT framework is an open research framework enabling the exchange and testing of a variety of atmospheric density models, orbital propagators, drag coefficient models, ground based observations, etc., and the study of their effects on conjunctions and uncertainty predictions. The framework is based on a modern service-oriented architecture controlled by a web interface and providing 3D visualizations. The goal of this project is to revolutionize the ability to monitor and track space objects during highly disturbed space weather conditions, provide suitable forecasts for satellite drag conditions and conjunction analysis, and enable the exchange of models, codes, and data in an open research environment. We will present capabilities and results of the IMPACT framework including a demo of the control interface and visualizations.

  14. Wind speed vector restoration algorithm

    NASA Astrophysics Data System (ADS)

    Baranov, Nikolay; Petrov, Gleb; Shiriaev, Ilia

    2018-04-01

    Impulse wind lidar (IWL) signal processing software developed by JSC «BANS» recovers the full wind speed vector from radial projections and provides wind parameter information up to a distance of 2 km. Signal processing techniques for increasing the accuracy and speed of wind parameter calculation have been studied in this research. The measurement results of the IWL and of a continuous scanning lidar were compared. IWL data processing modeling results have also been analyzed.
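
    Recovering a full wind vector from radial (line-of-sight) projections is, at its core, a least-squares problem; the sketch below assumes a simple conical scan geometry and synthetic measurements, not the actual IWL scan pattern or processing chain.

      import numpy as np

      az = np.deg2rad(np.arange(0, 360, 45))        # eight beam azimuths
      el = np.deg2rad(np.full_like(az, 30.0))       # fixed elevation angle

      # Unit line-of-sight vectors (east, north, up) for each beam.
      los = np.column_stack([np.cos(el) * np.sin(az),
                             np.cos(el) * np.cos(az),
                             np.sin(el)])

      true_wind = np.array([5.0, -2.0, 0.1])        # synthetic truth, m/s
      noise = 0.05 * np.random.default_rng(0).normal(size=len(az))
      v_radial = los @ true_wind + noise            # simulated radial projections

      # The full wind vector is the least-squares solution of los @ wind = v_radial.
      wind_estimate, *_ = np.linalg.lstsq(los, v_radial, rcond=None)
      print(wind_estimate)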

  15. Airborne Evaluation and Demonstration of a Time-Based Airborne Inter-Arrival Spacing Tool

    NASA Technical Reports Server (NTRS)

    Lohr, Gary W.; Oseguera-Lohr, Rosa M.; Abbott, Terence S.; Capron, William R.; Howell, Charles T.

    2005-01-01

    An airborne tool has been developed that allows an aircraft to obtain a precise inter-arrival time-based spacing interval from the preceding aircraft. The Advanced Terminal Area Approach Spacing (ATAAS) tool uses Automatic Dependent Surveillance-Broadcast (ADS-B) data to compute speed commands for the ATAAS-equipped aircraft to obtain this inter-arrival spacing behind another aircraft. The tool was evaluated in an operational environment at the Chicago O'Hare International Airport and in the surrounding terminal area with three participating aircraft flying fixed route area navigation (RNAV) paths and vector scenarios. Both manual and autothrottle speed management were included in the scenarios to demonstrate the ability to use ATAAS with either method of speed management. The results on the overall delivery precision of the tool, based on a target spacing of 90 seconds, were a mean of 90.8 seconds with a standard deviation of 7.7 seconds. The results for the RNAV and vector cases were, respectively, M=89.3, SD=4.9 and M=91.7, SD=9.0.

  16. The role of remote sensing and GIS for spatial prediction of vector-borne diseases transmission: a systematic review.

    PubMed

    Palaniyandi, M

    2012-12-01

    Several attempts have been made to apply remote sensing and GIS to the study of vectors, biodiversity, vector presence, vector abundance and vector-borne diseases with respect to space and time. This study reviews and appraises the potential use of remote sensing and GIS applications for the spatial prediction of vector-borne disease transmission. The presence and abundance of vectors and of vector-borne diseases, infection and transmission are not ubiquitous; they are localized and constrained by geographical, environmental and climatic factors. The presence of vectors and vector-borne diseases is complex in nature; it is, however, shaped and fueled by geographical, climatic and environmental factors, including man-made factors. Information now routinely derived from satellite data, including vegetation indices of canopy cover and density, soil type, soil moisture, soil texture, soil depth, etc., can be integrated in an expert GIS engine for spatial analysis together with other geoclimatic and geoenvironmental variables. The present study gives detailed information on classical studies, past and present, and on the future role of remote sensing and GIS in vector-borne disease control. Ecological modeling directly provides the relevant information for understanding the spatial variation of vector biodiversity, vector presence, vector abundance and vector-borne diseases in association with geoclimatic and environmental variables. Probability maps of the geographical distribution, and of seasonal variations in the horizontal and vertical distribution, of vector abundance and its association with vector-borne diseases can be obtained quickly and reliably with low-cost remote sensing and GIS tools.

  17. Geometry of discrete quantum computing

    NASA Astrophysics Data System (ADS)

    Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr; Tai, Yu-Tsung

    2013-05-01

    Conventional quantum computing entails a geometry based on the description of an n-qubit state using 2^n infinite precision complex numbers denoting a vector in a Hilbert space. Such numbers are in general uncomputable using any real-world resources, and, if we have the idea of physical law as some kind of computational algorithm of the universe, we would be compelled to alter our descriptions of physics to be consistent with computable numbers. Our purpose here is to examine the geometric implications of using finite fields F_p and finite complexified fields F_(p^2) (based on primes p congruent to 3 (mod 4)) as the basis for computations in a theory of discrete quantum computing, which would therefore become a computable theory. Because the states of a discrete n-qubit system are in principle enumerable, we are able to determine the proportions of entangled and unentangled states. In particular, we extend the Hopf fibration that defines the irreducible state space of conventional continuous n-qubit theories (which is the complex projective space CP^(2^n - 1)) to an analogous discrete geometry in which the Hopf circle for any n is found to be a discrete set of p + 1 points. The tally of unit-length n-qubit states is given, and reduced via the generalized Hopf fibration to DCP^(2^n - 1), the discrete analogue of the complex projective space, which has p^(2^n - 1) (p - 1) ∏_{k=1..n-1} (p^(2^k) + 1) irreducible states. Using a measure of entanglement, the purity, we explore the entanglement features of discrete quantum states and find that the n-qubit states based on the complexified field F_(p^2) have p^n (p - 1)^n unentangled states (the product of the tally for a single qubit) with purity 1, and they have p^(n+1) (p - 1) (p + 1)^(n-1) maximally entangled states with purity zero.
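
    The state counts quoted above can be transcribed directly into a short script; the formulas are taken from the abstract itself, and the values of p and n are arbitrary small examples (p must be a prime congruent to 3 mod 4).

      def irreducible_states(p, n):
          prod = 1
          for k in range(1, n):
              prod *= p**(2**k) + 1
          return p**(2**n - 1) * (p - 1) * prod

      def unentangled_states(p, n):
          return p**n * (p - 1)**n

      def maximally_entangled_states(p, n):
          return p**(n + 1) * (p - 1) * (p + 1)**(n - 1)

      for p in (3, 7):
          print(p, irreducible_states(p, 2), unentangled_states(p, 2),
                maximally_entangled_states(p, 2))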

  18. Special Relativity

    NASA Astrophysics Data System (ADS)

    Dixon, W. G.

    1982-11-01

    Preface; 1. The physics of space and time; 2. Affine spaces in mathematics and physics; 3. Foundations of dynamics; 4. Relativistic simple fluids; 5. Electrodynamics of polarisable fluids; Appendix: Vector and dyadic notation in three dimensions; Publications referred to in the text; Summary and index of symbols and conventions; Subject index.

  19. Plant Seeds as Model Vectors for the Transfer of Life Through Space

    NASA Astrophysics Data System (ADS)

    Tepfer, David; Leach, Sydney

    2006-12-01

    We consider plant seeds as terrestrial models for a vectored life form that could protect biological information in space. Seeds consist of maternal tissue surrounding and protecting an embryo. Some seeds resist deleterious conditions found in space: ultra low vacuum, extreme temperatures and radiation, including intense UV light. In a receptive environment, seeds could liberate a viable embryo, viable higher cells or a viable free-living organism (an endosymbiont or endophyte). Even if viability is lost, seeds still contain functional macro and small molecules (DNA, RNA, proteins, amino acids, lipids, etc.) that could provide the chemical basis for starting or modifying life. The possible release of endophytes or endosymbionts from a seed-like space traveler suggests that multiple domains of life, defined in DNA sequence phylogenies, could be disseminated simultaneously from Earth. We consider the possibility of exospermia, the outward transfer of life, as well as introspermia, the inward transfer of life, both as contemporary and as ancient events.

  20. Horizon as critical phenomenon

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Sik

    2016-09-01

    We show that renormalization group flow can be viewed as a gradual wave function collapse, where a quantum state associated with the action of field theory evolves toward a final state that describes an IR fixed point. The process of collapse is described by the radial evolution in the dual holographic theory. If the theory is in the same phase as the assumed IR fixed point, the initial state is smoothly projected to the final state. If in a different phase, the initial state undergoes a phase transition which in turn gives rise to a horizon in the bulk geometry. We demonstrate the connection between critical behavior and horizon in an example, by deriving the bulk metrics that emerge in various phases of the U(N) vector model in the large N limit based on the holographic dual constructed from quantum renormalization group. The gapped phase exhibits a geometry that smoothly ends at a finite proper distance in the radial direction. The geometric distance in the radial direction measures a complexity: the depth of renormalization group transformation that is needed to project the generally entangled UV state to a direct product state in the IR. For gapless states, entanglement persistently spreads out to larger length scales, and the initial state cannot be projected to the direct product state. The obstruction to smooth projection at the charge neutral point manifests itself as the long throat in the anti-de Sitter space. The Poincare horizon at infinity marks the critical point which exhibits a divergent length scale in the spread of entanglement. For the gapless states with non-zero chemical potential, the bulk space becomes the Lifshitz geometry with the dynamical critical exponent two. The identification of the horizon as a critical point may provide an explanation for the universality of horizons. We also discuss the structure of the bulk tensor network that emerges from the quantum renormalization group.

  1. Space Science

    NASA Image and Video Library

    2002-04-01

    Using the Solar Vector Magnetograph, a solar observation facility at NASA's Marshall Space Flight Center (MSFC), scientists from the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama, are monitoring the explosive potential of magnetic areas of the Sun. This effort could someday lead to better prediction of severe space weather, a phenomenon that occurs when blasts of particles and magnetic fields from the Sun impact the magnetosphere, the magnetic bubble around the Earth. When massive solar explosions, known as coronal mass ejections, blast through the Sun's outer atmosphere and plow toward Earth at speeds of thousands of miles per second, the resulting effects can be harmful to communication satellites and astronauts outside the Earth's magnetosphere. Like severe weather on Earth, severe space weather can be costly. On the ground, magnetic storms wrought by these solar particles can knock out electric power. Photographed are a group of contributing researchers in front of the Solar Vector Magnetograph at MSFC. The researchers are part of NSSTC's solar physics group, which develops instruments for measuring magnetic fields on the Sun. With these instruments, the group studies the origin, structure, and evolution of the solar magnetic fields and the impact they have on Earth's space environment.

  2. Thrust vectoring for lateral-directional stability

    NASA Technical Reports Server (NTRS)

    Peron, Lee R.; Carpenter, Thomas

    1992-01-01

    The advantages and disadvantages of using thrust vectoring for lateral-directional control and the effects of reducing the tail size of a single-engine aircraft were investigated. The aerodynamic characteristics of the F-16 aircraft were generated by using the Aerodynamic Preliminary Analysis System II panel code. The resulting lateral-directional linear perturbation analysis of a modified F-16 aircraft with various tail sizes and yaw vectoring was performed at several speeds and altitudes to determine the stability and control trends for the aircraft compared to these trends for a baseline aircraft. A study of the paddle-type turning vane thrust vectoring control system as used on the National Aeronautics and Space Administration F/A-18 High Alpha Research Vehicle is also presented.

  3. Discontinuous finite element method for vector radiative transfer

    NASA Astrophysics Data System (ADS)

    Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping

    2017-03-01

    The discontinuous finite element method (DFEM) is applied to solve vector radiative transfer in participating media. A discrete form of the vector radiation governing equations is derived, in which the angular space is discretized by the discrete-ordinates approach with a local refinement modification, and the spatial domain is discretized into finite, non-overlapping discontinuous elements. The elements in the solution domain are connected by modelling the numerical flux across the boundaries between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several vector radiative transfer problems are tested to verify the performance of the developed DFEM, including transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strongly forward-scattering medium and in a two-dimensional square medium. The DFEM results agree very well with benchmark solutions in published references, showing that the developed method is accurate and effective for solving vector radiative transfer problems.
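    For orientation, a schematic discrete-ordinates form of the vector radiative transfer equation of the kind being discretized here (standard notation, not the paper's exact formulation) is

    $$\boldsymbol{\Omega}_{m}\cdot\nabla\,\mathbf{I}_{m}(\mathbf{r}) + \beta\,\mathbf{I}_{m}(\mathbf{r}) = \frac{\sigma_{s}}{4\pi}\sum_{m'=1}^{M} w_{m'}\,\mathbf{Z}(\boldsymbol{\Omega}_{m'}\to\boldsymbol{\Omega}_{m})\,\mathbf{I}_{m'}(\mathbf{r}) + \mathbf{S}_{m}(\mathbf{r}),$$

    where $\mathbf{I}_{m}$ is the Stokes vector along the discrete ordinate $\boldsymbol{\Omega}_{m}$, $\beta$ the extinction coefficient, $\sigma_{s}$ the scattering coefficient, $\mathbf{Z}$ the scattering phase matrix, $w_{m'}$ the quadrature weights, and $\mathbf{S}_{m}$ a source term; the DFEM then approximates each $\mathbf{I}_{m}$ element by element and couples neighbouring elements through the boundary numerical flux.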

  4. Modular space station Phase B extension preliminary performance specification. Volume 2: Project

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The four systems of the modular space station project are described, and the interfaces between this project and the shuttle project, the tracking and data relay satellite project, and an arbitrarily defined experiment project are defined. The experiment project was synthesized from internal experiments, detached research and application modules, and attached research and application modules to derive a set of interface requirements which will support multiple combinations of these elements expected during the modular space station mission. The modular space station project element defines a 6-man orbital program capable of growth to a 12-man capability. The modular space station project element specification defines the modular space station system, the premission operations support system, the mission operations support system, and the cargo module system, and their interfaces.

  5. Displacement field for an edge dislocation in a layered half-space

    USGS Publications Warehouse

    Savage, J.C.

    1998-01-01

    The displacement field for an edge dislocation in an Earth model consisting of a layer welded to a half-space of different material is found in the form of a Fourier integral following the method given by Weeks et al. [1968]. There are four elementary solutions to be considered: the dislocation is either in the half-space or the layer and the Burgers vector is either parallel or perpendicular to the layer. A general two-dimensional solution for a dip-slip faulting or dike injection (arbitrary dip) can be constructed from a superposition of these elementary solutions. Surface deformations have been calculated for an edge dislocation located at the interface with Burgers vector inclined 0°, 30°, 60°, and 90° to the interface for the case where the rigidity of the layer is half of that of the half-space and the Poisson ratios are the same. Those displacement fields have been compared to the displacement fields generated by similarly situated edge dislocations in a uniform half-space. The surface displacement field produced by the edge dislocation in the layered half-space is very similar to that produced by an edge dislocation at a different depth in a uniform half-space. In general, a low-modulus (high-modulus) layer causes the half-space equivalent dislocation to appear shallower (deeper) than the actual dislocation in the layered half-space.

  6. Student Solution Manual for Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.

  7. Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.

  8. Vector Fluxgate Magnetometer (VMAG) Development for DSX

    DTIC Science & Technology

    2008-05-19

    AFRL-RV-HA-TR-2008-1108. Vector Fluxgate Magnetometer (VMAG) Development for DSX. Mark B. Moldwin, UCLA Institute of... For Public Release; Distribution Unlimited. ABSTRACT: UCLA is building a three-axis fluxgate magnetometer for the Air... fluxgate magnetometer provides the necessary data to support both the Space Weather (SWx) specification and mapping requirements and the WPIx

  9. Assessing semantic similarity of texts - Methods and algorithms

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of many text-related applications such as educational systems, information retrieval, and text summarization. The task relies on analysis that implements text-mining techniques. Text mining involves several pre-processing steps that produce a structured, representative model of the documents in a corpus by extracting and selecting the features that characterize their content. Generally the model is vector-based and enables further analysis with knowledge-discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It reduces the dimensionality of the document vector space and better captures the text semantics. The mathematical background of LSA for deriving the meaning of words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, is presented.
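    As a rough illustration of the pipeline described above (term weighting, dimensionality reduction by truncated SVD, similarity in the latent space), the following minimal sketch uses scikit-learn; TfidfVectorizer, TruncatedSVD, and cosine_similarity are standard library components, while the corpus and component count are made up for illustration.

    ```python
    # Minimal LSA sketch: TF-IDF matrix -> truncated SVD -> cosine similarity.
    # Corpus and number of components are illustrative only.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "vectors and matrices form the basis of linear algebra",
        "latent semantic analysis maps words and documents to vectors",
        "sand flies are vectors of leishmania parasites",
    ]

    X = TfidfVectorizer().fit_transform(corpus)         # document-term matrix
    svd = TruncatedSVD(n_components=2, random_state=0)  # latent concept space
    docs_latent = svd.fit_transform(X)                  # documents as low-dimensional vectors
    print(np.round(cosine_similarity(docs_latent), 2))  # pairwise semantic similarity
    ```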

  10. Using Grid Cells for Navigation.

    PubMed

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-08-05

    Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
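    The sketch below is not the Fourier-shift-theorem algorithm from the paper; it only illustrates, under the simplifying assumption that the displacement is smaller than the coarsest grid period, how phase codes at several spatial scales can be combined coarse-to-fine into a single displacement estimate (the paper addresses the harder case of vectors longer than the largest grid scale). The module periods and positions are invented for illustration.

    ```python
    # Toy 1-D example: combine per-module phases into a displacement estimate.
    import numpy as np

    scales = np.array([200.0, 80.0, 30.0, 12.0])  # "grid module" periods, coarse to fine

    def phases(x):
        """Phase of position x within each module, in [0, 1)."""
        return np.mod(x / scales, 1.0)

    def displacement(phi_start, phi_goal):
        """Coarse-to-fine combination of per-module phase differences."""
        dphi = np.mod(phi_goal - phi_start, 1.0)  # displacement modulo each period
        est = dphi[0] * scales[0]                 # coarsest module sets the range
        for lam, d in zip(scales[1:], dphi[1:]):
            k = np.round((est - d * lam) / lam)   # candidate congruent to d*lam (mod lam) nearest est
            est = d * lam + k * lam
        return est

    start, goal = 13.0, 161.0
    print(displacement(phases(start), phases(goal)))  # ~148.0
    ```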

  11. Remote sensing of surface currents with single shipborne high-frequency surface wave radar

    NASA Astrophysics Data System (ADS)

    Wang, Zhongbao; Xie, Junhao; Ji, Zhenyuan; Quan, Taifan

    2016-01-01

    High-frequency surface wave radar (HFSWR) is a useful technology for remote sensing of surface currents. It usually requires two (or more) stations spaced apart to create a two-dimensional (2D) current vector field. However, this method can only obtain measurements within the overlapping coverage, wasting most of the data from each single-radar observation, and it increases observation costs significantly. To reduce the number of required radars and increase the ocean area that can be measured, this paper proposes an economical methodology for remote sensing of the 2D surface current vector field using a single shipborne HFSWR. The methodology has two parts: (1) a real space-time multiple signal classification (MUSIC) method based on sparse representation and unitary transformation techniques is developed for measuring the radial currents from the spreading first-order spectra, and (2) the stream function method is introduced to obtain the 2D surface current vector field. Some important conclusions are drawn, and simulations are included to validate them.
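    The following sketch shows only a generic narrowband MUSIC pseudospectrum for a uniform linear array, assuming numpy; it is not the real space-time, sparse-representation variant developed in the paper, and the array geometry, noise level, and source angles are invented for illustration.

    ```python
    # Generic MUSIC pseudospectrum sketch (illustrative only).
    import numpy as np

    M, d, snapshots = 8, 0.5, 200                  # sensors, spacing (wavelengths), snapshots
    true_angles = np.deg2rad([-20.0, 35.0])
    rng = np.random.default_rng(0)

    def steering(theta):
        """Steering vectors of a uniform linear array for angles theta (radians)."""
        return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

    A = steering(true_angles)
    S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
    noise = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
    X = A @ S + noise                              # array snapshots

    R = X @ X.conj().T / snapshots                 # sample covariance matrix
    _, eigvec = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = eigvec[:, : M - 2]                        # noise subspace

    grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
    P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)  # pseudospectrum

    peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1      # local maxima
    best = peaks[np.argsort(P[peaks])[-2:]]
    print(np.sort(np.rad2deg(grid[best])))         # estimates near -20 and 35 degrees
    ```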

  12. The potential of latent semantic analysis for machine grading of clinical case summaries.

    PubMed

    Kintsch, Walter

    2002-02-01

    This paper introduces latent semantic analysis (LSA), a machine learning method for representing the meaning of words, sentences, and texts. LSA induces a high-dimensional semantic space from reading a very large amount of texts. The meaning of words and texts can be represented as vectors in this space and hence can be compared automatically and objectively. A generative theory of the mental lexicon based on LSA is described. The word vectors LSA constructs are context free, and each word, irrespective of how many meanings or senses it has, is represented by a single vector. However, when a word is used in different contexts, context appropriate word senses emerge. Several applications of LSA to educational software are described, involving the ability of LSA to quickly compare the content of texts, such as an essay written by a student and a target essay. An LSA-based software tool is sketched for machine grading of clinical case summaries written by medical students.

  13. Generation of vector beams using a double-wedge depolarizer: Non-quantum entanglement

    NASA Astrophysics Data System (ADS)

    Samlan, C. T.; Viswanathan, Nirmal K.

    2016-07-01

    Propagation of a horizontally polarized Gaussian beam through a double-wedge depolarizer generates vector beams with a spatially varying state of polarization. Jones calculus is used to show that such beams are maximally nonseparable on the basis of even (Gaussian)-odd (Hermite-Gaussian) mode parity and horizontal-vertical polarization state. The maximum nonseparability in the two degrees of freedom of the vector beam at the double-wedge depolarizer output is verified experimentally using a modified Sagnac interferometer and linear-analyser-projected interferograms to measure a concurrence of 0.94±0.002 and a violation of the Clauser-Horne-Shimony-Holt form of the Bell-like inequality of 2.704±0.024. The investigation is carried out in the context of the use of vector beams for metrological applications.
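    As a toy Jones-calculus illustration of how a position-dependent element turns a uniformly polarized input into a beam with spatially varying polarization, the sketch below applies a linear retarder whose retardance varies across the beam; it is not a model of the actual double-wedge depolarizer, and all parameter values are invented.

    ```python
    # Toy Jones-calculus sketch of a spatially varying polarization state.
    import numpy as np

    E_in = np.array([1.0, 0.0])                 # horizontally polarized input (Jones vector)

    def retarder(delta, theta=np.pi / 4):
        """Linear retarder with retardance delta and fast axis at angle theta."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        D = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
        return R @ D @ R.T

    for xi in np.linspace(-1.0, 1.0, 5):        # transverse positions across the beam
        delta = 2 * np.pi * xi                  # retardance varying linearly with position
        E_out = retarder(delta) @ E_in
        print(f"x = {xi:+.2f}  E_out = {np.round(E_out, 2)}")
    ```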

  14. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. Lastly, this work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting; its initial results are presented here.
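    The sketch below illustrates the basketizing idea in generic terms: tracks are grouped by a locality criterion, full baskets are dispatched to a vectorized routine, and leftovers fall back to single-track processing. The names, the basket size, and the grouping key are invented for illustration and are not GeantV's actual interfaces.

    ```python
    # Minimal basket-scheduling sketch (illustrative, not GeantV code).
    from collections import defaultdict

    VECTOR_SIZE = 4                                  # basket size at which vector processing pays off

    def process_vector(basket):
        print(f"vector step on {len(basket)} tracks in volume {basket[0]['volume']}")

    def process_scalar(track):
        print(f"scalar step on track {track['id']} in volume {track['volume']}")

    def schedule(tracks):
        baskets = defaultdict(list)
        for t in tracks:                             # gather tracks by locality criterion
            baskets[t["volume"]].append(t)
        for basket in baskets.values():
            while len(basket) >= VECTOR_SIZE:        # dispatch full vectors
                process_vector(basket[:VECTOR_SIZE])
                basket = basket[VECTOR_SIZE:]
            for t in basket:                         # remainder goes through scalar mode
                process_scalar(t)

    schedule([{"id": i, "volume": i % 3} for i in range(14)])
    ```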

  15. Current Knowledge of Leishmania Vectors in Mexico: How Geographic Distributions of Species Relate to Transmission Areas

    PubMed Central

    González, Camila; Rebollar-Téllez, Eduardo A.; Ibáñez-Bernal, Sergio; Becker-Fauser, Ingeborg; Martínez-Meyer, Enrique; Peterson, A. Townsend; Sánchez-Cordero, Víctor

    2011-01-01

    Leishmaniases are a group of vector-borne diseases with different clinical manifestations caused by parasites transmitted by sand fly vectors. In Mexico, the sand fly Lutzomyia olmeca olmeca is the only vector proven to transmit the parasite Leishmania mexicana to humans, which causes leishmaniasis. Other vector species with potential medical importance have been obtained, but their geographic distributions and relation to transmission areas have never been assessed. We modeled the ecological niches of nine sand fly species and projected niches to estimate potential distributions by using known occurrences, environmental coverages, and the algorithms GARP and Maxent. All vector species were distributed in areas with known recurrent transmission, except for Lu. diabolica, which appeared to be related only to areas of occasional transmission in northern Mexico. The distribution of Lu. o. olmeca does not overlap with all reported cutaneous leishmaniasis cases, suggesting that Lu. cruciata and Lu. shannoni are likely also involved as primary vectors in those areas. Our study provides useful information of potential risk areas of leishmaniasis transmission in Mexico. PMID:22049037

  16. Challenging assumptions of notational transparency: the case of vectors in engineering mathematics

    NASA Astrophysics Data System (ADS)

    Craig, Tracy S.

    2017-11-01

    The notation for vector analysis has a contentious nineteenth century history, with many different notations describing the same or similar concepts competing for use. While the twentieth century has seen a great deal of unification in vector analysis notation, variation still remains. In this paper, the two primary notations used for expressing the components of a vector are discussed in historical and current context. Popular mathematical texts use the two notations as if they are transparent and interchangeable. In this research project, engineering students' proficiency at vector analysis was assessed and the data were analyzed using the Rasch measurement method. Results indicate that the students found items expressed in unit vector notation more difficult than those expressed in parenthesis notation. The expert experience of notation as transparent and unproblematically symbolic of underlying processes independent of notation is shown to contrast with the student experience where the less familiar notation is experienced as harder to work with.

  17. Predicted altitudinal shifts and reduced spatial distribution of Leishmania infantum vector species under climate change scenarios in Colombia.

    PubMed

    González, Camila; Paz, Andrea; Ferro, Cristina

    2014-01-01

    Visceral leishmaniasis (VL) is caused by the trypanosomatid parasite Leishmania infantum (=Leishmania chagasi), and is epidemiologically relevant due to its wide geographic distribution, the number of annual cases reported and the increase in its co-infection with HIV. Two vector species have been incriminated in the Americas: Lutzomyia longipalpis and Lutzomyia evansi. In Colombia, L. longipalpis is distributed along the Magdalena River Valley while L. evansi is only found in the northern part of the country. Regarding the epidemiology of the disease, in Colombia the incidence of VL has decreased over the last few years without any intervention being implemented. Additionally, changes in transmission cycles have been reported, with urban transmission occurring on the Caribbean Coast. In Europe and North America climate change seems to be driving a latitudinal shift of leishmaniasis transmission. Here, we explored the spatial distributions of the two known vector species of L. infantum in Colombia and projected their future distributions under climate change scenarios to establish the expansion potential of the disease. An updated database including L. longipalpis and L. evansi collection records from Colombia was compiled. Ecological niche models were built for each species using the Maxent software and 13 Worldclim bioclimatic coverages. Projections were made for the pessimistic CSIRO A2 scenario, which predicts the largest temperature increase in the absence of emission reductions, and the optimistic Hadley B2 scenario, which predicts the smallest temperature increase. The database contained 23 records for L. evansi and 39 records for L. longipalpis, distributed along the Magdalena River Valley and the Caribbean Coast, where the potential distribution areas of both species were also predicted by Maxent. Climate change projections showed an overall reduction in the spatial distribution of the two vector species, promoting a shift in altitudinal distribution for L. longipalpis and confining L. evansi to certain regions of the Caribbean Coast. Altitudinal shifts have been reported for cutaneous leishmaniasis vectors in Colombia and Peru; here, we predict the same outcome for VL vectors in Colombia. Changes in spatial distribution patterns could be affecting local abundances due to climatic pressures on vector populations, thus reducing the incidence of human cases. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  18. A simple map-based localization strategy using range measurements

    NASA Astrophysics Data System (ADS)

    Moore, Kevin L.; Kutiyanawala, Aliasgar; Chandrasekharan, Madhumita

    2005-05-01

    In this paper we present a map-based approach to localization. We consider indoor navigation in known environments based on the idea of a "vector cloud" by observing that any point in a building has an associated vector defining its distance to the key structural components (e.g., walls, ceilings, etc.) of the building in any direction. Given a building blueprint we can derive the "ideal" vector cloud at any point in space. Then, given measurements from sensors on the robot we can compare the measured vector cloud to the possible vector clouds cataloged from the blueprint, thus determining location. We present algorithms for implementing this approach to localization, using the Hamming norm, the 1-norm, and the 2-norm. The effectiveness of the approach is verified by experiments on a 2-D testbed using a mobile robot with a 360° laser range-finder and through simulation analysis of robustness.
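    A minimal sketch of the matching step described above follows: a measured range vector is compared against a small catalog of ideal vectors precomputed from the blueprint, using a selectable norm, and the closest cataloged location wins. The catalog values, the tolerance used for the Hamming-style count, and the number of bearings are all invented for illustration.

    ```python
    # Vector-cloud matching sketch: nearest cataloged location under a chosen norm.
    import numpy as np

    catalog = {                                   # location -> ideal range vector (8 bearings)
        (1.0, 2.0): np.array([3.0, 2.5, 4.0, 1.0, 3.0, 2.5, 4.0, 1.0]),
        (4.0, 2.0): np.array([1.5, 2.5, 6.0, 1.0, 1.5, 2.5, 6.0, 1.0]),
        (4.0, 6.0): np.array([1.5, 6.0, 6.0, 2.0, 1.5, 6.0, 6.0, 2.0]),
    }

    def localize(measured, norm=2):
        """Return the cataloged location whose vector cloud best matches the measurement."""
        def dist(ideal):
            if norm == 0:                         # Hamming-style: count of disagreeing ranges
                return np.count_nonzero(np.abs(measured - ideal) > 0.1)
            return np.linalg.norm(measured - ideal, ord=norm)
        return min(catalog, key=lambda loc: dist(catalog[loc]))

    measured = np.array([1.4, 2.6, 5.9, 1.1, 1.6, 2.4, 6.1, 0.9])
    print(localize(measured, norm=1))             # -> (4.0, 2.0)
    ```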

  19. Tensor Sparse Coding for Positive Definite Matrices.

    PubMed

    Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikos

    2013-08-02

    In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (e.g., image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices, which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.
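    The following small numerical check, which is only an illustration and not the sparse coding algorithm proposed in the paper, shows the structural point being made: a nonnegative combination of SPD dictionary atoms remains SPD, whereas an unconstrained signed combination of the kind ordinary vectorized sparse coding can produce may lose positive definiteness.

    ```python
    # SPD structure under different combinations of dictionary atoms (illustration only).
    import numpy as np

    D1 = np.array([[2.0, 0.5], [0.5, 1.0]])     # SPD atom
    D2 = np.array([[1.0, -0.3], [-0.3, 2.0]])   # SPD atom

    nonneg = 0.5 * D1 + 1.2 * D2                # nonnegative weights: stays in the SPD cone
    signed = 1.0 * D1 - 1.5 * D2                # signed weights: positivity can be lost

    for name, M in [("nonnegative", nonneg), ("signed", signed)]:
        print(name, "min eigenvalue:", round(float(np.linalg.eigvalsh(M).min()), 2))
    ```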
