Human action classification using Procrustes shape theory
NASA Astrophysics Data System (ADS)
Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun
2015-02-01
In this paper, we propose a new method that classifies human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and then derive the Procrustes fit vector for this pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and we compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video and project this sequence onto the tangent space whose pole is the sequence of mean shape vectors of a target action class. We then calculate the Procrustes distance between the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method using one public dataset, the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
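A minimal sketch (not the authors' implementation) of the nearest-mean-shape classification step described above, assuming 2-D landmark configurations encoded as complex vectors; the frame data, landmark counts, and class labels are invented for illustration.

```python
import numpy as np

def preshape(landmarks):
    """Center and scale a (k,) complex landmark vector to unit norm (pre-shape)."""
    z = landmarks - landmarks.mean()
    return z / np.linalg.norm(z)

def procrustes_distance(z1, z2):
    """Full Procrustes distance between two pre-shape vectors (rotation removed)."""
    # The optimal rotation is absorbed by taking the modulus of the Hermitian inner product.
    return np.sqrt(max(0.0, 1.0 - abs(np.vdot(z1, z2)) ** 2))

def classify_sequence(frames, class_mean_sequences):
    """Assign a sequence of landmark frames to the class with minimum summed distance."""
    pre = [preshape(f) for f in frames]
    costs = {}
    for label, means in class_mean_sequences.items():
        costs[label] = sum(procrustes_distance(z, m) for z, m in zip(pre, means))
    return min(costs, key=costs.get)

# Toy usage: 10 frames of 8 landmarks each, two hypothetical action classes.
rng = np.random.default_rng(0)
frames = [rng.normal(size=8) + 1j * rng.normal(size=8) for _ in range(10)]
classes = {"walk": [preshape(f) for f in frames],
           "jump": [preshape(rng.normal(size=8) + 1j * rng.normal(size=8)) for _ in range(10)]}
print(classify_sequence(frames, classes))
```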
NASA Astrophysics Data System (ADS)
Benioff, Paul
2015-05-01
The purpose of this paper is to put the description of number scaling and its effects on physics and geometry on a firmer foundation, and to make it more understandable. A main point is that two different concepts, number and number value are combined in the usual representations of number structures. This is valid as long as just one structure of each number type is being considered. It is not valid when different structures of each number type are being considered. Elements of base sets of number structures, considered by themselves, have no meaning. They acquire meaning or value as elements of a number structure. Fiber bundles over a space or space time manifold, M, are described. The fiber consists of a collection of many real or complex number structures and vector space structures. The structures are parameterized by a real or complex scaling factor, s. A vector space at a fiber level, s, has, as scalars, real or complex number structures at the same level. Connections are described that relate scalar and vector space structures at both neighbor M locations and at neighbor scaling levels. Scalar and vector structure valued fields are described and covariant derivatives of these fields are obtained. Two complex vector fields, each with one real and one imaginary field, appear, with one complex field associated with positions in M and the other with position dependent scaling factors. A derivation of the covariant derivative for scalar and vector valued fields gives the same vector fields. The derivation shows that the complex vector field associated with scaling fiber levels is the gradient of a complex scalar field. Use of these results in gauge theory shows that the imaginary part of the vector field associated with M positions acts like the electromagnetic field. The physical relevance of the other three fields, if any, is not known.
Vectors in Use in a 3D Juggling Game Simulation
ERIC Educational Resources Information Center
Kynigos, Chronis; Latsi, Maria
2006-01-01
The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13 year-old students in relation to vectors as…
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
NASA Astrophysics Data System (ADS)
Gurevich, Boris M.; Tempel'man, Arcady A.
2010-05-01
For a dynamical system τ with 'time' ℤ^d and compact phase space X, we introduce three subsets of the space ℝ^m related to a continuous function f: X → ℝ^m: the set of time means of f and two sets of space means of f, namely those corresponding to all τ-invariant probability measures and those corresponding to some equilibrium measures on X. The main results concern topological properties of these sets of means and their mutual position. Bibliography: 18 titles.
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
A vector space model approach to identify genetically related diseases.
Sarkar, Indra Neil
2012-01-01
The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene-disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi syndrome. In both cases, a set of plausible diseases was identified that may warrant further exploration. This study furthers seminal work by Swanson et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both biomedical literature and genomic resources (like GenBank) can be combined towards the identification of putative correlations of interest. To this end, the relevance of the diseases predicted in this study using the vector space modeling approach was validated based on supporting literature. The results of this study suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.
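A minimal sketch of the underlying vector space idea, assuming each disease is represented by a weighted gene vector assembled from the knowledge bases; the gene lists and weights below are invented for illustration, not taken from OMIM, GenBank, or Medline.

```python
import numpy as np

# Hypothetical disease-by-gene weight matrix (rows: diseases, columns: genes),
# e.g. derived from knowledge-base co-occurrence counts.
genes = ["APP", "PSEN1", "APOE", "SNRPN", "NDN"]
diseases = {
    "Alzheimer disease":       np.array([3.0, 2.0, 4.0, 0.0, 0.0]),
    "Prader-Willi syndrome":   np.array([0.0, 0.0, 0.0, 5.0, 3.0]),
    "Frontotemporal dementia": np.array([1.0, 1.0, 2.0, 0.0, 0.0]),
}

def cosine(u, v):
    """Cosine similarity between two disease vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank candidate diseases by similarity to a target disease vector.
target = "Alzheimer disease"
scores = {d: cosine(diseases[target], v) for d, v in diseases.items() if d != target}
for d, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{d}: {s:.2f}")
```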
The potential of latent semantic analysis for machine grading of clinical case summaries.
Kintsch, Walter
2002-02-01
This paper introduces latent semantic analysis (LSA), a machine learning method for representing the meaning of words, sentences, and texts. LSA induces a high-dimensional semantic space from reading a very large amount of texts. The meaning of words and texts can be represented as vectors in this space and hence can be compared automatically and objectively. A generative theory of the mental lexicon based on LSA is described. The word vectors LSA constructs are context free, and each word, irrespective of how many meanings or senses it has, is represented by a single vector. However, when a word is used in different contexts, context appropriate word senses emerge. Several applications of LSA to educational software are described, involving the ability of LSA to quickly compare the content of texts, such as an essay written by a student and a target essay. An LSA-based software tool is sketched for machine grading of clinical case summaries written by medical students.
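A minimal sketch of an LSA-style pipeline (term-document matrix, truncated SVD, cosine comparison in the latent space) using scikit-learn; the toy corpus and dimensionality are placeholders, not the grading system described in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "patient presents with chest pain and shortness of breath",
    "student essay summarizing a cardiac case with chest pain",
    "notes on fractured wrist after a fall",
]

# Term-document matrix, then projection into a low-dimensional latent semantic space.
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Compare a student summary (doc 1) and an unrelated note (doc 2) against a target text (doc 0).
print(cosine_similarity(lsa[0:1], lsa[1:2])[0, 0])
print(cosine_similarity(lsa[0:1], lsa[2:3])[0, 0])
```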
Computation of Surface Integrals of Curl Vector Fields
ERIC Educational Resources Information Center
Hu, Chenglie
2007-01-01
This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
Non-lightlike ruled surfaces with constant curvatures in Minkowski 3-space
NASA Astrophysics Data System (ADS)
Ali, Ahmad Tawfik
We study the non-lightlike ruled surfaces in Minkowski 3-space with non-lightlike base curve c(s) = ∫(αt + βn + γb) ds, where t, n, b are the tangent, principal normal and binormal vectors of an arbitrary timelike curve Γ(s). Some important results on flat, minimal, II-minimal and II-flat non-lightlike ruled surfaces are obtained. Finally, the following interesting theorem is proved: the only non-zero constant mean curvature (CMC) non-lightlike ruled surface is the developable timelike ruled surface generated by the binormal vector.
Assessing semantic similarity of texts - Methods and algorithms
NASA Astrophysics Data System (ADS)
Rozeva, Anna; Zerkova, Silvia
2017-12-01
Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide a structured, representative model of the documents in a corpus by extracting and selecting the features characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, are presented.
Laplace-Runge-Lenz vector in quantum mechanics in noncommutative space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gáliková, Veronika; Kováčik, Samuel; Prešnajder, Peter
2013-12-15
The main point of this paper is to examine a “hidden” dynamical symmetry connected with the conservation of Laplace-Runge-Lenz vector (LRL) in the hydrogen atom problem solved by means of non-commutative quantum mechanics (NCQM). The basic features of NCQM will be introduced to the reader, the key one being the fact that the notion of a point, or a zero distance in the considered configuration space, is abandoned and replaced with a “fuzzy” structure in such a way that the rotational invariance is preserved. The main facts about the conservation of LRL vector in both classical and quantum theory will be reviewed. Finally, we will search for an analogy in the NCQM, provide our results and their comparison with the QM predictions. The key notions we are going to deal with are non-commutative space, Coulomb-Kepler problem, and symmetry.
Cohen, Trevor; Schvaneveldt, Roger W; Rindflesch, Thomas C
2009-11-14
Corpus-derived distributional models of semantic distance between terms have proved useful in a number of applications. For both theoretical and practical reasons, it is desirable to extend these models to encode discrete concepts and the ways in which they are related to one another. In this paper, we present a novel vector space model that encodes semantic predications derived from MEDLINE by the SemRep system into a compact spatial representation. The associations captured by this method are of a different and complementary nature to those derived by traditional vector space models, and the encoding of predication types presents new possibilities for knowledge discovery and information retrieval.
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
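A minimal sketch, under the usual rigid-plate model v = ω × r, of estimating an Euler vector from site velocities by ridge-regularized least squares; this is not the tailored regularization of the paper, and the coordinates, noise level, and damping factor are invented.

```python
import numpy as np

def design_matrix(sites):
    """Stack the 3x3 blocks mapping an Euler vector omega to site velocities v = omega x r."""
    rows = []
    for x, y, z in sites:
        rows.append([[0.0,   z,  -y],
                     [-z,  0.0,   x],
                     [ y,   -x, 0.0]])
    return np.vstack(rows)

def estimate_euler_vector(sites, velocities, damping=0.0):
    """Ridge-regularized least squares; damping=0 reduces to ordinary least squares."""
    A = design_matrix(sites)
    b = np.asarray(velocities).ravel()
    lhs = A.T @ A + damping * np.eye(3)
    return np.linalg.solve(lhs, A.T @ b)

# Toy example: synthetic velocities from a known Euler vector plus noise.
rng = np.random.default_rng(1)
omega_true = np.array([0.1, -0.2, 0.3])
sites = rng.normal(size=(6, 3))
vels = np.cross(np.tile(omega_true, (6, 1)), sites) + 0.01 * rng.normal(size=(6, 3))
print(estimate_euler_vector(sites, vels, damping=0.1))
```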
Vector-averaged gravity does not alter acetylcholine receptor single channel properties
NASA Technical Reports Server (NTRS)
Reitstetter, R.; Gruener, R.
1994-01-01
To examine the physiological sensitivity of membrane receptors to altered gravity, we examined the single-channel properties of the acetylcholine receptor (AChR), in co-cultures of Xenopus myocytes and neurons, exposed to vector-averaged gravity in the clinostat. This experimental paradigm produces an environment in which, from the cell's perspective, the gravitational vector is "nulled" by continuous averaging. In that respect, the clinostat simulates one aspect of space microgravity, where the gravity force is greatly reduced. After clinorotation, the AChR channel mean open-time and conductance were statistically not different from control values but showed a rotation-dependent trend that suggests a process of cellular adaptation to clinorotation. These findings therefore suggest that AChR channel function may not be affected in the microgravity of space despite changes in the receptor's cellular organization.
Gaussian statistics for palaeomagnetic vectors
NASA Astrophysics Data System (ADS)
Love, J. J.; Constable, C. G.
2003-03-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
NASA Astrophysics Data System (ADS)
Zimina, S. V.
2015-06-01
We present the results of a statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with a quadratic constraint on the useful-signal amplification, with allowance for weight-coefficient fluctuations. Using perturbation theory, the expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as the formula for the weight-vector covariance matrix, are obtained in the first approximation. The fluctuations are shown to lead to signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.
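A minimal illustrative sketch of a complex LMS weight-update recursion for an adaptive array, the stochastic iteration whose weight-vector fluctuations the abstract analyzes; the quadratic amplification constraint is omitted, and all signals and parameters are invented.

```python
import numpy as np

def lms_adapt(snapshots, desired, mu=0.01):
    """Run the complex LMS recursion w <- w + mu * conj(e) * x over array snapshots."""
    n_elements = snapshots.shape[1]
    w = np.zeros(n_elements, dtype=complex)
    for x, d in zip(snapshots, desired):
        y = np.vdot(w, x)              # array output w^H x for this snapshot
        e = d - y                      # error with respect to the reference signal
        w = w + mu * np.conj(e) * x    # stochastic gradient step (source of weight fluctuations)
    return w

# Toy usage: 4-element array, broadside signal plus independent element noise.
rng = np.random.default_rng(2)
steering = np.ones(4, dtype=complex)
s = rng.normal(size=500) + 1j * rng.normal(size=500)
X = np.outer(s, steering) + 0.1 * (rng.normal(size=(500, 4)) + 1j * rng.normal(size=(500, 4)))
print(np.round(lms_adapt(X, s), 3))
```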
Effects of a scalar scaling field on quantum mechanics
Benioff, Paul
2016-04-18
This paper describes the effects of a complex scalar scaling field on quantum mechanics. The field origin is an extension of the gauge freedom for basis choice in gauge theories to the underlying scalar field. The extension is based on the idea that the value of a number at one space time point does not determine the value at another point. This, combined with the description of mathematical systems as structures of different types, results in the presence of separate number fields and vector spaces as structures, at different space time locations. Complex number structures and vector spaces at each location are scaled by a complex space time dependent scaling factor. The effect of this scaling factor on several physical and geometric quantities has been described in other work. Here the emphasis is on quantum mechanics of one and two particles, their states and properties. Multiparticle states are also briefly described. The effect shows as a complex, nonunitary, scalar field connection on a fiber bundle description of nonrelativistic quantum mechanics. Here, the lack of physical evidence for the presence of this field so far means that the coupling constant of this field to fermions is very small. It also means that the gradient of the field must be very small in a local region of cosmological space and time. Outside this region, there are no restrictions on the field gradient.
About Phase: Synthetic Aperture Radar and the Phase Retrieval
2014-03-01
Phys. Rev. A 70 (2004) 052107. 57. S. T. Flammia, A. Silberfarb, C. M. Caves, Minimal informationally complete measurements for pure states, Found... is not injective. To resolve this (technical) issue, throughout this thesis we consider sets of the form V/S, where V is a vector space and S is a... multiplicative subgroup of the field of scalars. By this notation, we mean to identify vectors x, y ∈ V for which there exists a scalar c ∈ S such that y
Jorge-Botana, Guillermo; Olmos, Ricardo; León, José Antonio
2009-11-01
There is currently a widespread interest in indexing and extracting taxonomic information from large text collections. An example is the automatic categorization of informally written medical or psychological diagnoses, followed by the extraction of epidemiological information or even terms and structures needed to formulate guiding questions as a heuristic tool for helping doctors. Vector space models have been successfully used to this end (Lee, Cimino, Zhu, Sable, Shanker, Ely & Yu, 2006; Pakhomov, Buntrock & Chute, 2006). In this study we use a computational model known as Latent Semantic Analysis (LSA) on a diagnostic corpus with the aim of retrieving definitions (in the form of lists of semantic neighbors) of common structures it contains (e.g. "storm phobia", "dog phobia") or less common structures that might be formed by logical combinations of categories and diagnostic symptoms (e.g. "gun personality" or "germ personality"). In the quest to bring definitions into line with the meaning of structures and make them in some way representative, various problems commonly arise while recovering content using vector space models. We propose some approaches which bypass these problems, such as Kintsch's (2001) predication algorithm and some corrections to the way lists of neighbors are obtained, which have already been tested on semantic spaces in a non-specific domain (Jorge-Botana, León, Olmos & Hassan-Montero, under review). The results support the idea that the predication algorithm may also be useful for extracting more precise meanings of certain structures from scientific corpora, and that the introduction of some corrections based on vector length may increase its efficiency on non-representative terms.
Chagas disease vector control and Taylor's law
Rodríguez-Planes, Lucía I.; Gaspe, María S.; Cecere, María C.; Cardinal, Marta V.
2017-01-01
Background Large spatial and temporal fluctuations in the population density of living organisms have profound consequences for biodiversity conservation, food production, pest control and disease control, especially vector-borne disease control. Chagas disease vector control based on insecticide spraying could benefit from improved concepts and methods to deal with spatial variations in vector population density. Methodology/Principal findings We show that Taylor's law (TL) of fluctuation scaling describes accurately the mean and variance over space of relative abundance, by habitat, of four insect vectors of Chagas disease (Triatoma infestans, Triatoma guasayana, Triatoma garciabesi and Triatoma sordida) in 33,908 searches of people's dwellings and associated habitats in 79 field surveys in four districts in the Argentine Chaco region, before and after insecticide spraying. As TL predicts, the logarithm of the sample variance of bug relative abundance closely approximates a linear function of the logarithm of the sample mean of abundance in different habitats. Slopes of TL indicate spatial aggregation or variation in habitat suitability. Predictions of new mathematical models of the effect of vector control measures on TL agree overall with field data before and after community-wide spraying of insecticide. Conclusions/Significance A spatial Taylor's law identifies key habitats with high average infestation and spatially highly variable infestation, providing a new instrument for the control and elimination of the vectors of a major human disease. PMID:29190728
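A minimal sketch of fitting Taylor's law, the linear relation between log variance and log mean of abundance across habitats described above; the per-habitat counts are synthetic.

```python
import numpy as np

# Synthetic per-habitat bug counts (one array of site counts per habitat).
rng = np.random.default_rng(3)
habitats = [rng.poisson(lam, size=200) * rng.integers(1, 4, size=200)
            for lam in (0.5, 1.0, 2.0, 5.0, 10.0)]

means = np.array([h.mean() for h in habitats])
variances = np.array([h.var(ddof=1) for h in habitats])

# Taylor's law: log10(variance) = log10(a) + b * log10(mean); fit the slope b by least squares.
b, log_a = np.polyfit(np.log10(means), np.log10(variances), 1)
print(f"TL slope b = {b:.2f}, intercept log10(a) = {log_a:.2f}")
```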
Free-space optical polarization demultiplexing and multiplexing by means of conical refraction.
Turpin, Alex; Loiko, Yurii; Kalkandjiev, Todor K; Mompart, Jordi
2012-10-15
Polarization demultiplexing and multiplexing by means of conical refraction is proposed to increase the channel capacity for free-space optical communication applications. The proposed technique is based on the forward-backward optical transform occurring when a light beam propagates consecutively along the optic axes of two identical biaxial crystals with opposite orientations of their conical refraction characteristic vectors. We present an experimental proof of the usefulness of the conical refraction demultiplexing and multiplexing technique by increasing the channel capacity at optical frequencies by one order of magnitude over a propagation distance of 4 m.
Fundamental Principles of Classical Mechanics: A Geometrical Perspective
NASA Astrophysics Data System (ADS)
Lam, Kai S.
2014-07-01
Classical mechanics is the quantitative study of the laws of motion for macroscopic physical systems with mass. The fundamental laws of this subject, known as Newton's Laws of Motion, are expressed in terms of second-order differential equations governing the time evolution of vectors in a so-called configuration space of a system (see Chapter 12). In an elementary setting, these are usually vectors in 3-dimensional Euclidean space, such as position vectors of point particles; but typically they can be vectors in higher dimensional and more abstract spaces. A general knowledge of the mathematical properties of vectors, not only in their most intuitive incarnations as directed arrows in physical space but as elements of abstract linear vector spaces, and those of linear operators (transformations) on vector spaces as well, is then indispensable in laying the groundwork for both the physical and the more advanced mathematical - more precisely topological and geometrical - concepts that will prove to be vital in our subject. In this beginning chapter we will review these properties, and introduce the all-important related notions of dual spaces and tensor products of vector spaces. The notational convention for vectorial and tensorial indices used for the rest of this book (except when otherwise specified) will also be established...
Learned Vector-Space Models for Document Retrieval.
ERIC Educational Resources Information Center
Caid, William R.; And Others
1995-01-01
The Latent Semantic Indexing and MatchPlus systems examine similar contexts in which words appear and create representational models that capture the similarity of meaning of terms and then use the representation for retrieval. Text Retrieval Conference experiments using these systems demonstrate the computational feasibility of using…
Flow noise of an underwater vector sensor embedded in a flexible towed array.
Korenbaum, Vladimir I; Tagiltsev, Alexander A
2012-05-01
The objective of this work is to simulate the flow noise of a vector sensor embedded in a flexible towed array. The mathematical model developed, based on long-wavelength analysis of the inner space of a cylindrical multipole source, predicts the reduction of the flow noise of a vector sensor embedded in an underwater flexible towed array by means of intensimetric processing (cross-spectral density calculation of oscillatory velocity and sound-pressure-sensor responses). It is found experimentally that intensimetric processing results in flow noise reduction by 12-25 dB at mean levels and by 10-30 dB in fluctuations compared to a squared oscillatory velocity channel. The effect of flow noise suppression in the intensimetry channel relative to a squared sound pressure channel is observed, but only for frequencies above the threshold. These suppression values are 10-15 dB at mean noise levels and 3-6 dB in fluctuations. At towing velocities of 1.5-3 ms(-1) and an accumulation time of 98.3 s, the threshold frequency in fluctuations is between 30 and 45 Hz.
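A minimal sketch of the intensimetric processing idea (cross-spectral density between the sound-pressure and oscillatory-velocity channels, here via scipy.signal.csd); the synthetic signals merely illustrate why incoherent flow noise is suppressed relative to a coherent acoustic component.

```python
import numpy as np
from scipy.signal import csd

fs = 1000.0
rng = np.random.default_rng(4)
t = np.arange(0, 10, 1 / fs)

# A coherent acoustic tone buried in independent flow noise on each channel.
tone = np.sin(2 * np.pi * 60 * t)
pressure = tone + 0.5 * rng.normal(size=t.size)
velocity = tone + 0.5 * rng.normal(size=t.size)

# Cross-spectral density: incoherent (flow) noise averages down, the coherent tone remains.
f, Pxy = csd(pressure, velocity, fs=fs, nperseg=2048)
print(f[np.argmax(np.abs(Pxy))])  # frequency of the dominant coherent component
```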
Airborne Evaluation and Demonstration of a Time-Based Airborne Inter-Arrival Spacing Tool
NASA Technical Reports Server (NTRS)
Lohr, Gary W.; Oseguera-Lohr, Rosa M.; Abbott, Terence S.; Capron, William R.; Howell, Charles T.
2005-01-01
An airborne tool has been developed that allows an aircraft to obtain a precise inter-arrival time-based spacing interval from the preceding aircraft. The Advanced Terminal Area Approach Spacing (ATAAS) tool uses Automatic Dependent Surveillance-Broadcast (ADS-B) data to compute speed commands for the ATAAS-equipped aircraft to obtain this inter-arrival spacing behind another aircraft. The tool was evaluated in an operational environment at the Chicago O'Hare International Airport and in the surrounding terminal area with three participating aircraft flying fixed route area navigation (RNAV) paths and vector scenarios. Both manual and autothrottle speed management were included in the scenarios to demonstrate the ability to use ATAAS with either method of speed management. The results on the overall delivery precision of the tool, based on a target spacing of 90 seconds, were a mean of 90.8 seconds with a standard deviation of 7.7 seconds. The results for the RNAV and vector cases were, respectively, M=89.3, SD=4.9 and M=91.7, SD=9.0.
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
Simulation of an epidemic model with vector transmission
NASA Astrophysics Data System (ADS)
Dickman, Adriana G.; Dickman, Ronald
2015-03-01
We study a lattice model for vector-mediated transmission of a disease in a population consisting of two species, A and B, which contract the disease from one another. Individuals of species A are sedentary, while those of species B (the vector) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied. We study the static and dynamic critical behavior of the model using initial spreading, initial decay, and quasistationary simulations. Simulations are checked against mean-field analysis. Although phase transitions to an absorbing state fall generically in the directed percolation universality class, this appears not to be the case for the present model.
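A minimal sketch of a lattice simulation with sedentary hosts (species A) and diffusing vectors (species B), in the spirit of the model described above; the lattice size, rates, and update rules are illustrative choices, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 50                       # lattice side
n_vectors = 500
p_host_to_vec = 0.3          # infection probabilities per contact
p_vec_to_host = 0.3
recovery = 0.1

host_infected = rng.random((L, L)) < 0.01          # sedentary species A
vec_pos = rng.integers(0, L, size=(n_vectors, 2))  # diffusing species B
vec_infected = np.zeros(n_vectors, dtype=bool)

for step in range(200):
    # Vectors perform a random walk on the lattice (periodic boundaries).
    vec_pos = (vec_pos + rng.integers(-1, 2, size=vec_pos.shape)) % L
    x, y = vec_pos[:, 0], vec_pos[:, 1]
    # Cross-species transmission at shared sites.
    vec_infected |= host_infected[x, y] & (rng.random(n_vectors) < p_host_to_vec)
    new_host = vec_infected & (rng.random(n_vectors) < p_vec_to_host)
    host_infected[x[new_host], y[new_host]] = True
    # Recovery back to the susceptible state.
    host_infected &= rng.random((L, L)) >= recovery
    vec_infected &= rng.random(n_vectors) >= recovery

print("final host prevalence:", host_infected.mean())
```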
Graph theory approach to the eigenvalue problem of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.; Bainum, P. M.
1981-01-01
Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. Graphic interpretation of the determinant of a matrix is employed to reduce a higher dimensional matrix into combinations of smaller dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices formulated to obtain smaller dimensional equivalents of the original numerical matrix. Computation time becomes less and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
A Novel Clustering Method Curbing the Number of States in Reinforcement Learning
NASA Astrophysics Data System (ADS)
Kotani, Naoki; Nunobiki, Masayuki; Taniguchi, Kenji
We propose an efficient state-space construction method for reinforcement learning. Our method controls the number of categories by improving the clustering method of Fuzzy ART, which is an autonomous state-space construction method. The proposed method represents the weight vector as the mean value of the input vectors in order to curb the number of new categories, and eliminates categories whose state values are low to curb the total number of categories. As the state value is updated, the size of a category becomes smaller so that the policy can be learned precisely. We verified the effectiveness of the proposed method with simulations of a reaching problem for a two-link robot arm. We confirmed that the number of categories was reduced and the agent achieved the complex task quickly.
Multiscale vector fields for image pattern recognition
NASA Technical Reports Server (NTRS)
Low, Kah-Chan; Coggins, James M.
1990-01-01
A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
Vector calculus in non-integer dimensional space and its applications to fractal media
NASA Astrophysics Data System (ADS)
Tarasov, Vasily E.
2015-02-01
We suggest a generalization of vector calculus for the case of non-integer dimensional space. The first- and second-order operations such as gradient, divergence, and the scalar and vector Laplace operators for non-integer dimensional space are defined. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.
A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Goldberg, Hirsh; Nasrabadi, Nasser M.
2007-04-01
In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectra and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation of the projection of the current test pixel spectra and the OWR mean spectra are greater than a certain threshold. Comparisons are made using receiver operating characteristics (ROC) curves.
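A minimal sketch of the baseline (linear, global-background) Reed-Xiaoli detector mentioned above, scoring a test pixel by its Mahalanobis distance from background statistics; the dual-window bookkeeping and the kernelized variants are omitted, and the spectra are synthetic.

```python
import numpy as np

def rx_score(pixel, background):
    """Mahalanobis distance of a pixel spectrum from the background mean and covariance."""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    diff = pixel - mu
    return float(diff @ np.linalg.solve(cov, diff))

# Synthetic background spectra (100 pixels, 20 bands), one anomalous and one normal test pixel.
rng = np.random.default_rng(6)
background = rng.normal(size=(100, 20))
anomaly = rng.normal(loc=3.0, size=20)
normal_pixel = rng.normal(size=20)

print(rx_score(anomaly, background), rx_score(normal_pixel, background))
```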
Statistical properties of color-signal spaces.
Lenz, Reiner; Bui, Thanh Hai
2005-05-01
In applications of principal component analysis (PCA) it has often been observed that the eigenvector with the largest eigenvalue has only nonnegative entries when the vectors of the underlying stochastic process have only nonnegative values. This has been used to show that the coordinate vectors in PCA are all located in a cone. We prove that the nonnegativity of the first eigenvector follows from the Perron-Frobenius (and Krein-Rutman theory). Experiments show also that for stochastic processes with nonnegative signals the mean vector is often very similar to the first eigenvector. This is not true in general, but we first give a heuristical explanation why we can expect such a similarity. We then derive a connection between the dominance of the first eigenvalue and the similarity between the mean and the first eigenvector and show how to check the relative size of the first eigenvalue without actually computing it. In the last part of the paper we discuss the implication of theoretical results for multispectral color processing.
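A minimal sketch checking the observation in the abstract: for nonnegative signals, the leading eigenvector of the (uncentered) second-moment matrix has nonnegative entries and is often close to the mean vector; the nonnegative "spectra" here are random draws, not measured color signals.

```python
import numpy as np

rng = np.random.default_rng(7)
# Nonnegative "color signal" samples (e.g. reflectance spectra), 500 samples x 31 bands.
signals = rng.gamma(shape=2.0, scale=1.0, size=(500, 31))

# Second-moment (correlation-type) matrix without mean removal, as used for nonnegative signal spaces.
corr = signals.T @ signals / signals.shape[0]
eigvals, eigvecs = np.linalg.eigh(corr)
first = eigvecs[:, -1]
first = first if first.sum() >= 0 else -first   # fix the arbitrary sign convention

mean_dir = signals.mean(axis=0)
mean_dir /= np.linalg.norm(mean_dir)

print("first eigenvector nonnegative:", bool((first >= -1e-12).all()))
print("cosine(mean, first eigenvector):", float(mean_dir @ first))
```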
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
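A minimal sketch of the Monte Carlo idea described above: sample the three Cartesian Delta-v components from zero-mean normals with possibly unequal standard deviations and summarize the magnitude; the sigma values are illustrative.

```python
import numpy as np

def delta_v_magnitude_stats(sigmas, n_samples=100_000, seed=0):
    """Monte Carlo mean, std, and selected percentiles of |dv| for given component sigmas."""
    rng = np.random.default_rng(seed)
    components = rng.normal(0.0, sigmas, size=(n_samples, 3))
    mag = np.linalg.norm(components, axis=1)
    return mag.mean(), mag.std(), np.percentile(mag, [50, 90, 99])

mean, std, pct = delta_v_magnitude_stats(sigmas=np.array([1.0, 1.0, 0.5]))
print(f"mean={mean:.3f}, std={std:.3f}, 50/90/99th percentiles={pct}")
```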
Li, Chun-Fang
2007-12-15
A unified description of free-space cylindrical vector beams is presented that is an integral transformation solution to the vector Helmholtz equation and the transversality condition. In the paraxial condition, this solution not only includes the known J(1) Bessel-Gaussian vector beam and the axisymmetric Laguerre-Gaussian vector beam that were obtained by solving the paraxial wave equations, but also predicts two kinds of vector beam, called modified Bessel-Gaussian vector beams.
NASA Astrophysics Data System (ADS)
Field, J. H.
2006-06-01
It is demonstrated how the right-hand sides of the Lorentz transformation equations may be written, in a Lorentz-invariant manner, as 4-vector scalar products. This implies the existence of invariant length intervals analogous to invariant proper time intervals. An important distinction between the physical meanings of the space-time and energy-momentum 4-vectors is pointed out. The formalism is shown to provide a short derivation of the Lorentz force law of classical electrodynamics and the conventional definition of the magnetic field in terms of spatial derivatives of the 4-vector potential, as well as the Faraday-Lenz law and the Gauss law for magnetic fields. The connection between the Gauss law for the electric field and the electrodynamic Ampère law, due to the 4-vector character of the electromagnetic potential, is also pointed out.
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, E.
An alternative approach to characterise real voltage dips is proposed and evaluated in this study. The proposed methodology is based on voltage-space vector solutions, identifying parameters for ellipse trajectories by using the least-squares algorithm applied on a sliding window along the disturbance. The most likely patterns are then estimated through a clustering process based on the k-means algorithm. The objective is to offer an efficient and easily implemented alternative to characterise faults and visualise the most likely instantaneous phase-voltage evolution during events through their corresponding voltage-space vector trajectories. This novel solution minimises the data to be stored but maintains extensive information about the dips, including starting and ending transients. The proposed methodology has been applied satisfactorily to real voltage dips obtained from intensive field-measurement campaigns carried out in a Spanish wind power plant over a period of several years. A comparison to traditional minimum root-mean-square voltage and time-duration classifications is also included in this study.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
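A minimal sketch of the transformation to equivalent dimensions, here using the empirical cumulative distribution of each parameter rather than the kernel estimator used in the paper; the toy catalog columns (time, magnitude, depth) are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist

def to_equivalent_dimensions(catalog):
    """Map each column (earthquake parameter) through its empirical CDF onto [0, 1]."""
    n, d = catalog.shape
    ranks = np.argsort(np.argsort(catalog, axis=0), axis=0)
    return (ranks + 1) / n

# Toy catalog: columns are occurrence time, magnitude, depth.
rng = np.random.default_rng(8)
catalog = np.column_stack([np.sort(rng.uniform(0, 1000, 300)),
                           rng.exponential(0.8, 300) + 2.0,
                           rng.uniform(0, 30, 300)])

ed = to_equivalent_dimensions(catalog)
# Distances between events are now Euclidean in the [0, 1]^3 equivalent-dimension space.
print("mean pairwise Euclidean distance in ED space:", pdist(ed).mean())
```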
Marelli, Marco; Baroni, Marco
2015-07-01
The present work proposes a computational model of morpheme combination at the meaning level. The model moves from the tenets of distributional semantics, and assumes that word meanings can be effectively represented by vectors recording their co-occurrence with other words in a large text corpus. Given this assumption, affixes are modeled as functions (matrices) mapping stems onto derived forms. Derived-form meanings can be thought of as the result of a combinatorial procedure that transforms the stem vector on the basis of the affix matrix (e.g., the meaning of nameless is obtained by multiplying the vector of name with the matrix of -less). We show that this architecture accounts for the remarkable human capacity of generating new words that denote novel meanings, correctly predicting semantic intuitions about novel derived forms. Moreover, the proposed compositional approach, once paired with a whole-word route, provides a new interpretative framework for semantic transparency, which is here partially explained in terms of ease of the combinatorial procedure and strength of the transformation brought about by the affix. Model-based predictions are in line with the modulation of semantic transparency on explicit intuitions about existing words, response times in lexical decision, and morphological priming. In conclusion, we introduce a computational model to account for morpheme combination at the meaning level. The model is data-driven, theoretically sound, and empirically supported, and it makes predictions that open new research avenues in the domain of semantic processing. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemach, Charles; Kurien, Susan
These notes present an account of the Local Wave Vector (LWV) model of a turbulent flow defined throughout physical space. The previously-developed Local Wave Number (LWN) model is taken as a point of departure. Some general properties of turbulent fields and appropriate notation are given first. The LWV model is presently restricted to incompressible flows and the incompressibility assumption is introduced at an early point in the discussion. The assumption that the turbulence is homogeneous is also introduced early on. This assumption can be relaxed by generalizing the space diffusion terms of LWN, but the present discussion is focused on a modeling of homogeneous turbulence.
Visualization and Analysis of Geology Word Vectors for Efficient Information Extraction
NASA Astrophysics Data System (ADS)
Floyd, J. S.
2016-12-01
When a scientist begins studying a new geographic region of the Earth, they frequently begin by gathering relevant scientific literature in order to understand what is known, for example, about the region's geologic setting, structure, stratigraphy, and tectonic and environmental history. Experienced scientists typically know what keywords to seek and understand that if a document contains one important keyword, then other words in the document may be important as well. Word relationships in a document give rise to what is known in linguistics as the context-dependent nature of meaning. For example, the meaning of the word `strike' in geology, as in the strike of a fault, is quite different from its popular meaning in baseball. In addition, word order, such as in the phrase `Cretaceous-Tertiary boundary,' often corresponds to the order of sequences in time or space. The context of words and the relevance of words to each other can be derived quantitatively by machine learning vector representations of words. Here we show the results of training a neural network to create word vectors from scientific research papers from selected rift basins and mid-ocean ridges: the Woodlark Basin of Papua New Guinea, the Hess Deep rift, and the Gulf of Mexico basin. The word vectors are statistically defined by surrounding words within a given window, limited by the length of each sentence. The word vectors are analyzed by their cosine distance to related words (e.g., `axial' and `magma'), classified by high dimensional clustering, and visualized by reducing the vector dimensions and plotting the vectors on a two- or three-dimensional graph. Similarity analysis of `Triassic' and `Cretaceous' returns `Jurassic' as the nearest word vector, suggesting that the model is capable of learning the geologic time scale. Similarity analysis of `basalt' and `minerals' automatically returns mineral names such as `chlorite', `plagioclase,' and `olivine.' Word vector analysis and visualization allow one to extract information from hundreds of papers or more and find relationships in less time than it would take to read all of the papers. As machine learning tools become more commonly available, more and more scientists will be able to use and refine these tools for their individual needs.
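A minimal sketch of training word vectors on a toy geology corpus and querying nearest neighbours with gensim's Word2Vec; the four sentences stand in for hundreds of papers, so the similarities are illustrative only.

```python
from gensim.models import Word2Vec

# Tokenized toy "abstracts"; a real run would use the full text of many papers.
sentences = [
    "the cretaceous tertiary boundary marks a major extinction".split(),
    "jurassic and triassic strata underlie the cretaceous section".split(),
    "axial magma chamber reflections beneath the ridge crest".split(),
    "plagioclase olivine and chlorite occur in the basalt".split(),
]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=200, seed=0)
print(model.wv.most_similar("cretaceous", topn=3))   # nearest neighbours by cosine distance
print(model.wv.similarity("basalt", "olivine"))
```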
The Vector Space as a Unifying Concept in School Mathematics.
ERIC Educational Resources Information Center
Riggle, Timothy Andrew
The purpose of this study was to show how the concept of vector space can serve as a unifying thread for mathematics programs--elementary school to pre-calculus college level mathematics. Indicated are a number of opportunities to demonstrate how emphasis upon the vector space structure can enhance the organization of the mathematics curriculum.…
ERIC Educational Resources Information Center
Aminu, Abdulhadi
2010-01-01
By rhotrix we understand an object that lies in some way between (n x n)-dimensional matrices and (2n - 1) x (2n - 1)-dimensional matrices. Representation of vectors in rhotrices is different from the representation of vectors in matrices. A number of vector spaces in matrices and their properties are known. On the other hand, little seems to be…
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
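The acceptance test sketched below is one reading of the geometric idea, not the authors' exact algorithm: a BPSK codeword and a noisy received word are mapped to vectors in R^n, and the decoding is flagged as unreliable when the angle between the two vectors exceeds a bound. The threshold value and channel noise level are illustrative.

```python
# Hedged sketch: angle-based acceptance test on vectors in R^n.
import numpy as np

def angle_deg(u, v):
    """Angle between two vectors in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(0)
codeword = np.array([1, -1, 1, 1, -1, 1, -1, 1], dtype=float)  # BPSK (+1/-1)
received = codeword + rng.normal(scale=0.6, size=codeword.size)  # AWGN channel

theta = angle_deg(received, codeword)
ANGLE_BOUND_DEG = 35.0          # illustrative bound, not taken from the paper
accept = theta <= ANGLE_BOUND_DEG
print(f"angle = {theta:.1f} deg, accept = {accept}")
```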
Semantically enabled image similarity search
NASA Astrophysics Data System (ADS)
Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason
2015-05-01
Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector or "embedding" space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally-smoothed but information-limited content, while overhead imagery provides an information-rich but temporally-limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.
NASA Astrophysics Data System (ADS)
Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander
2008-02-01
Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and progression to cancer makes it possible to see early cancer and precancerous lesions often missed by white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, and during endoscopy, cystoscopy and bronchoscopy using Xillix ONCOLIFE. Methods: The image function f(x,y): R^2 -> R^3 was transformed from the original RGB color space into a space in which a vector of 46 values refers to every point labeled by defined xy-coordinates, f(x,y): R^2 -> R^46. By means of a Fisher discriminant, the vector of attributes of each point analyzed in the image was reduced according to two classes defined as pathologic areas (foreground) and healthy areas (background). As a result, the four highest Fisher coefficients allowing the greatest separation between points of pathologic (foreground) and healthy (background) areas were chosen. In this way a new function f(x,y): R^2 -> R^4 was created in which the point (x,y) corresponds to the vector (Y, H, a*, cII). In the second step, an appropriate classifier was constructed using Gaussian mixtures and expectation-maximisation. This classifier gives the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The obtained map of the probability distribution was presented by means of pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
Thyra Abstract Interface Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A.
2005-09-01
Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
Split Octonion Reformulation for Electromagnetic Chiral Media of Massive Dyons
NASA Astrophysics Data System (ADS)
Chanyal, B. C.
2017-12-01
In an explicit, unified, and covariant formulation of an octonion algebra, we study and generalize the electromagnetic chiral field equations of massive dyons with the split octonionic representation. Starting with the 2×2 Zorn vector matrix realization of split octonions and its dual Euclidean spaces, we represent the unified structure of split octonionic electric and magnetic induction vectors for chiral media. As such, in the present paper, we describe the chiral parameter and pairing constants in terms of the split octonionic matrix representation of the Drude-Born-Fedorov constitutive relations. We express a split octonionic electromagnetic field vector for chiral media, which exhibits the unified field structure of electric and magnetic chiral fields of dyons. The beauty of the split octonionic representation of the Zorn vector matrix realization is that every scalar and vector component has its own meaning in the generalized chiral electromagnetism of dyons. Correspondingly, we obtain the alternative form of the generalized Proca-Maxwell equations of massive dyons in chiral media. Furthermore, the continuity equations, Poynting theorem and wave propagation for generalized electromagnetic fields of chiral media of massive dyons are established by the split octonionic form of the Zorn vector matrix algebra.
New Term Weighting Formulas for the Vector Space Method in Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chisholm, E.; Kolda, T.G.
The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
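As a concrete baseline for the vector space model discussed here, the following sketch builds tf-idf document vectors and ranks them by cosine similarity to a query. It uses scikit-learn's TfidfVectorizer as a stand-in for one classic weighting scheme and does not reproduce the paper's new weighting formulas; the documents and query are placeholders.

```python
# Minimal vector-space-model sketch with tf-idf weighting (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "vector space model for information retrieval",
    "term weighting schemes for document vectors",
    "neural networks for image classification",
]
query = ["term weighting in the vector space model"]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)       # documents as tf-idf vectors
query_vec = vectorizer.transform(query)         # query in the same term space

# Rank documents by cosine similarity to the query.
scores = cosine_similarity(query_vec, doc_vecs).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), docs[idx])
```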
NASA Astrophysics Data System (ADS)
Díaz-Michelena, M.; de Frutos, J.; Ordóñez, A. A.; Rivero, M. A.; Mesa, J. L.; González, L.; Lavín, C.; Aroca, C.; Sanz, M.; Maicas, M.; Prieto, J. L.; Cobos, P.; Pérez, M.; Kilian, R.; Baeza, O.; Langlais, B.; Thébault, E.; Grösser, J.; Pappusch, M.
2017-09-01
In space instrumentation, there is currently no instrument dedicated to susceptibility or complete magnetization measurements of rocks. Magnetic field instrument suites are generally vector (or scalar) magnetometers, which locally measure the magnetic field. When mounted on board rovers, the electromagnetic perturbations associated with motors and other elements make it difficult to reap the benefits from the inclusion of such instruments. However, magnetic characterization is essential to understand key aspects of the present and past history of planetary objects. The work presented here overcomes the limitations currently existing in space instrumentation by developing a new portable and compact multi-sensor instrument for groundbreaking high-resolution magnetic characterization of planetary surfaces and sub-surfaces. This new technology introduces for the first time magnetic susceptometry (real and imaginary parts) as a complement to existing compact vector magnetometers for planetary exploration. This work aims to solve the limitations currently existing in space instrumentation by means of providing a new portable and compact multi-sensor instrument for use in space science and planetary exploration to solve some of the open questions on the crustal and, more generally, planetary evolution within the Solar System.
NASA Technical Reports Server (NTRS)
Hall, Justin R.; Hastrup, Rolf C.
1990-01-01
The principal challenges in providing effective deep space navigation, telecommunications, and information management architectures and designs for Mars exploration support are presented. The fundamental objectives are to provide the mission with the means to monitor and control mission elements, obtain science, navigation, and engineering data, compute state vectors and navigate, and to move these data efficiently and automatically between mission nodes for timely analysis and decision making. New requirements are summarized, and related issues and challenges including the robust connectivity for manned and robotic links, are identified. Enabling strategies are discussed, and candidate architectures and driving technologies are described.
Extended vector-tensor theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp
Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three massive vector modes and two massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.
Weak Compactness and Control Measures in the Space of Unbounded Measures
Brooks, James K.; Dinculeanu, Nicolae
1972-01-01
We present a synthesis theorem for a family of locally equivalent measures defined on a ring of sets. This theorem is then used to exhibit a control measure for weakly compact sets of unbounded measures. In addition, the existence of a local control measure for locally strongly bounded vector measures is proved by means of the synthesis theorem. PMID:16591980
Vector solution for the mean electromagnetic fields in a layer of random particles
NASA Technical Reports Server (NTRS)
Lang, R. H.; Seker, S. S.; Levine, D. M.
1986-01-01
The mean electromagnetic fields are found in a layer of randomly oriented particles lying over a half space. A matrix-dyadic formulation of Maxwell's equations is employed in conjunction with the Foldy-Lax approximation to obtain equations for the mean fields. A two variable perturbation procedure, valid in the limit of small fractional volume, is then used to derive uncoupled equations for the slowly varying amplitudes of the mean wave. These equations are solved to obtain explicit expressions for the mean electromagnetic fields in the slab region in the general case of arbitrarily oriented particles and arbitrary polarization of the incident radiation. Numerical examples are given for the application to remote sensing of vegetation.
Solution of the determinantal assignment problem using the Grassmann matrices
NASA Astrophysics Data System (ADS)
Karcanias, Nicos; Leventides, John
2016-02-01
The paper provides a direct solution to the determinantal assignment problem (DAP) which unifies all frequency assignment problems of linear control theory. The current approach is based on the solvability of the exterior equation ? where ? is an n-dimensional vector space over ?, which is an integral part of the solution of DAP. New criteria for the existence of solutions and for their computation are given, based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix, ?, of the vector ?, which is constructed from the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable) if and only if ?, where ?; the solution space for a decomposable ? is the space ?. This provides an alternative linear algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solution of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.
Manifolds for pose tracking from monocular video
NASA Astrophysics Data System (ADS)
Basu, Saurav; Poulin, Joshua; Acton, Scott T.
2015-03-01
We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
NASA Technical Reports Server (NTRS)
Bykhovskiy, E. B.; Smirnov, N. V.
1983-01-01
The Hilbert space L2(omega) of vector functions is studied. A breakdown of L2(omega) into orthogonal subspaces is discussed and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, the properties of the operators are examined.
Bundles over nearly-Kahler homogeneous spaces in heterotic string theory
NASA Astrophysics Data System (ADS)
Klaput, Michael; Lukas, Andre; Matti, Cyril
2011-09-01
We construct heterotic vacua based on six-dimensional nearly-Kahler homogeneous manifolds and non-trivial vector bundles thereon. Our examples are based on three specific group coset spaces. It is shown how to construct line bundles over these spaces, compute their properties and build up vector bundles consistent with supersymmetry and anomaly cancelation. It turns out that the most interesting coset is SU(3)/U(1)2. This space supports a large number of vector bundles which lead to consistent heterotic vacua, some of them with three chiral families.
Dual Vector Spaces and Physical Singularities
NASA Astrophysics Data System (ADS)
Rowlands, Peter
Though we often refer to 3-D vector space as constructed from points, there is no mechanism from within its definition for doing this. In particular, space, on its own, cannot accommodate the singularities that we call fundamental particles. This requires a commutative combination of space as we know it with another 3-D vector space, which is dual to the first (in a physical sense). The combination of the two spaces generates a nilpotent quantum mechanics/quantum field theory, which incorporates exact supersymmetry and ultimately removes the anomalies due to self-interaction. Among the many natural consequences of the dual space formalism are half-integral spin for fermions, zitterbewegung, Berry phase and a zero norm Berwald-Moor metric for fermionic states.
Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.
ERIC Educational Resources Information Center
Taghva, Kazem; And Others
1996-01-01
Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
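A toy illustration of the correction-vector definition the abstract refers to, solved here by a dense direct solver on a small random Hermitian matrix. The DMRG and Krylov-space machinery of the paper is not reproduced; the Hamiltonian, probe operator, and broadening below are placeholders, and only the defining linear equation and the resulting spectral function are shown.

```python
# Correction vector: (E0 + omega + i*eta - H)|c> = A|psi0>, dense toy version.
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.normal(size=(n, n))
H = (M + M.T) / 2                      # toy Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
E0, psi0 = evals[0], evecs[:, 0]       # ground state energy and vector

A = np.diag(rng.normal(size=n))        # toy operator whose spectrum is probed
eta = 0.1                              # Lorentzian broadening

omegas = np.linspace(0.0, 5.0, 200)
spectrum = []
for omega in omegas:
    rhs = A @ psi0
    c = np.linalg.solve((E0 + omega + 1j * eta) * np.eye(n) - H, rhs)
    # Spectral function: -(1/pi) Im <psi0| A^dagger |c(omega)>
    spectrum.append(-np.imag(rhs.conj() @ c) / np.pi)

print(np.round(spectrum[:5], 4))
```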
Reasoning with Vectors: A Continuous Model for Fast Robust Inference.
Widdows, Dominic; Cohen, Trevor
2015-10-01
This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package.
All ASD complex and real 4-dimensional Einstein spaces with Λ≠0 admitting a nonnull Killing vector
NASA Astrophysics Data System (ADS)
Chudecki, Adam
2016-12-01
Anti-self-dual (ASD) 4-dimensional complex Einstein spaces with nonzero cosmological constant Λ equipped with a nonnull Killing vector are considered. It is shown that any conformally nonflat metric of such spaces can be always brought to a special form and the Einstein field equations can be reduced to the Boyer-Finley-Plebański equation (Toda field equation). Some alternative forms of the metric are discussed. All possible real slices (neutral, Euclidean and Lorentzian) of ASD complex Einstein spaces with Λ≠0 admitting a nonnull Killing vector are found.
Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos
2007-01-01
Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
Learning with LOGO: Logo and Vectors.
ERIC Educational Resources Information Center
Lough, Tom; Tipps, Steve
1986-01-01
This is the first of a two-part series on the general concept of vector space. Provides tool procedures to allow investigation of vector properties, vector addition and subtraction, and X and Y components. Lists several sources of additional vector ideas. (JM)
Managing the resilience space of the German energy system - A vector analysis.
Schlör, Holger; Venghaus, Sandra; Märker, Carolin; Hake, Jürgen-Friedrich
2018-07-15
The UN Sustainable Development Goals formulated in 2016 confirmed the sustainability concept of the Earth Summit of 1992 and supported UNEP's green economy transition concept. The transformation of the energy system (Energiewende) is the keystone of Germany's sustainability strategy and of the German green economy concept. We use ten updated energy-related indicators of the German sustainability strategy to analyse the German energy system. The development of the sustainability indicators is examined in the monitoring process by a vector analysis performed in two-dimensional Euclidean space (Euclidean plane). The aim of the novel vector analysis is to measure the current status of the Energiewende in Germany and thereby provide decision makers with information about the strains for the specific remaining pathway of the single indicators and of the total system in order to meet the sustainability targets of the Energiewende. Within this vector model, three vectors (the normative sustainable development vector, the real development vector, and the green economy vector) define the resilience space of our analysis. The resilience space encloses a number of vectors representing different pathways with different technological and socio-economic strains to achieve a sustainable development of the green economy. In this space, the decision will be made as to whether the government measures will lead to a resilient energy system or whether a readjustment of indicator targets or political measures is necessary. The vector analysis enables us to analyse both the government's ambitiousness, which is expressed in the sustainability target for the indicators at the start of the sustainability strategy representing the starting preference order of the German government (SPO) and, secondly, the current preference order of German society in order to bridge the remaining distance to reach the specific sustainability goals of the strategy summarized in the current preference order (CPO).
Distributions of Magnetic Field Variations, Differences and Residuals
1999-02-01
differences and residuals between two neighbouring sites (1997 data, Montecristo area). Each panel displays the results from a specific vector... This means, in effect, counting the number of times the absolute value increased past one of a series of regularly spaced thresholds, and tallying the... results. Crossings of the zero level were not counted. Fig. 7 illustrates the binning procedure for a fictitious data set and four bin thresholds on
Adaptive Bayes classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.
1975-01-01
An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.
Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries
NASA Astrophysics Data System (ADS)
Meljanac, Daniel; Meljanac, Stjepan; Pikutić, Danijel
2017-12-01
Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincaré-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ -Minkowski spaces and (iii) κ -Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed.
Space-Time Earthquake Prediction: The Error Diagrams
NASA Astrophysics Data System (ADS)
Molchan, G.
2010-08-01
The quality of earthquake prediction is usually characterized by a two-dimensional diagram n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of space-time alarm. Unlike the time prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets { n, τ w } in which τ w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the ( n, τ w ) representation of all random guess strategies, the set D, and prove that there exists the unique case of w when D degenerates to the diagonal n + τ w = 1. We find also a confidence zone of D on the ( n, τ w ) plane when the local target rates are known roughly. These facts are important for correct interpretation of ( n, τ w ) diagrams when we discuss the prediction capability of the data or prediction methods.
NASA Astrophysics Data System (ADS)
Mikeš, Josef; Stepanov, Sergey; Hinterleitner, Irena
2012-07-01
In our paper we have determined the dimension of the space of conformal Killing-Yano tensors and the dimensions of its two subspaces of closed conformal Killing-Yano and Killing-Yano tensors on pseudo Riemannian manifolds of constant curvature. This result is a generalization of well known results on sharp upper bounds of the dimensions of the vector spaces of conformal Killing-Yano, Killing-Yano and concircular vector fields on pseudo Riemannian manifolds of constant curvature.
1979-07-31
[Glossary-of-symbols fragment: strain vector (3 x 3); space derivative of the stress tensor; force vector per unit volume; density. Chapter III: total force; stiffness matrix; vector displacements; mass matrix; space operating matrix; matrix moduli; operating matrix in the Z direction; matrix of shape...] In a dissipating medium the deformation of a solid is a function of time, temperature and space. The creep phenomenon is a deformation process in which there is
The Sequential Implementation of Array Processors when there is Directional Uncertainty
1975-08-01
The University of Washington kindly supplied office space and computing facilities. The author has benefited greatly from discussions with several other... [Glossary-of-symbols fragment: inverse of Q; general observation space; general vector of observations; general observation vector of dimension K; ith observation; real vector space of dimension m; autocorrelation.]
Effective-medium theory of elastic waves in random networks of rods.
Katz, J I; Hoffman, J J; Conradi, M S; Miller, J G
2012-06-01
We formulate an effective medium (mean field) theory of a material consisting of randomly distributed nodes connected by straight slender rods, hinged at the nodes. Defining wavelength-dependent effective elastic moduli, we calculate both the static moduli and the dispersion relations of ultrasonic longitudinal and transverse elastic waves. At finite wave vector k the waves are dispersive, with phase and group velocities decreasing with increasing wave vector. These results are directly applicable to networks with empty pore space. They also describe the solid matrix in two-component (Biot) theories of fluid-filled porous media. We suggest the possibility of low density materials with higher ratios of stiffness and strength to density than those of foams, aerogels, or trabecular bone.
NASA Astrophysics Data System (ADS)
Avdyushev, V.; Banshchikova, M.; Chuvashov, I.; Kuzmin, A.
2017-09-01
The paper presents the capabilities of the software "Vector-M" for diagnostics of the ionosphere state from auroral emission images and plasma characteristics from different orbits, as part of a space weather monitoring and control system. The software "Vector-M" is developed by the celestial mechanics and astrometry department of Tomsk State University in collaboration with the Space Research Institute (Moscow) and the Central Aerological Observatory of the Russian Federal Service for Hydrometeorology and Environmental Monitoring. The software "Vector-M" is intended for calculation of attendant geophysical and astronomical information for the centre of mass of the spacecraft and the space of observations in the experiment with the auroral imager Aurovisor-VIS/MP in the orbit of the prospective Meteor-MP spacecraft.
NASA Astrophysics Data System (ADS)
Xue, Yan
The optimal growth and its relationship with the forecast skill of the Zebiak and Cane model are studied using a simple statistical model best fit to the original nonlinear model and local linear tangent models about idealized climatic states (the mean background and ENSO cycles in a long model run), and the actual forecast states, including two sets of runs using two different initialization procedures. The seasonally varying Markov model best fit to a suite of 3-year forecasts in a reduced EOF space (18 EOFs) fits the original nonlinear model reasonably well and has comparable or better forecast skill. The initial error growth in a linear evolution operator A is governed by the eigenvalues of A^{T}A, and the square roots of eigenvalues and eigenvectors of A^{T}A are named singular values and singular vectors. One dominant growing singular vector is found, and the optimal 6-month growth rate is largest for a (boreal) spring start and smallest for a fall start. Most of the variation in the optimal growth rate of the two forecasts is seasonal, attributable to the seasonal variations in the mean background, except that in the cold events it is substantially suppressed. It is found that the mean background (zero anomaly) is the most unstable state, and the "forecast IC states" are more unstable than the "coupled model states". One dominant growing singular vector is found, characterized by north-south and east-west dipoles, convergent winds on the equator in the eastern Pacific and a deepened thermocline in the whole equatorial belt. This singular vector is insensitive to initial time and optimization time, but its final pattern is a strong function of initial states. The ENSO system is inherently unpredictable, for the dominant singular vector can amplify 5-fold to 24-fold in 6 months and evolve into the large scales characteristic of ENSO. However, the inherent ENSO predictability is only a secondary factor, while the mismatch between the model and the data is a primary factor controlling the current forecast skill.
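The singular-vector computation described here (square roots of eigenvalues and eigenvectors of A^T A) can be sketched in a few lines. The propagator below is a random stand-in rather than the fitted seasonal Markov model, so only the mechanics of extracting the optimal growth rate and optimal initial perturbation are shown.

```python
# Optimal initial-error growth for a linear propagator A via the SVD.
import numpy as np

rng = np.random.default_rng(2)
n = 18                                   # e.g. number of retained EOFs
A = rng.normal(scale=0.3, size=(n, n))   # toy linear evolution operator

U, s, Vt = np.linalg.svd(A)
optimal_growth = s[0]                    # largest singular value
optimal_initial = Vt[0]                  # right singular vector (optimal initial error)
evolved_pattern = U[:, 0]                # left singular vector (final pattern)

# Check: the optimal perturbation really amplifies by the leading singular value.
amplification = np.linalg.norm(A @ optimal_initial)
print(optimal_growth, amplification)     # these two numbers agree
```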
2015-11-20
between tweets and profiles as follows: the TFIDF score, which calculates the cosine similarity between a tweet and a profile in the vector space model with... TFIDF weights of terms. The vector space model represents a document as a vector; tweets and profiles can be expressed as vectors, T = (t... gain(Tr_i), where Tr is the returned tweet set and gain() is the score function for a tweet. Uninteresting or spam/junk tweets receive a gain of 0
2018-01-01
Abstract We examined how attention causes neural population representations of shape and location to change in ventral stream (AIT) and dorsal stream (LIP). Monkeys performed two identical delayed-match-to-sample (DMTS) tasks, attending either to shape or location. In AIT, shapes were more discriminable when directing attention to shape rather than location, measured by an increase in mean distance between population response vectors. In LIP, attending to location rather than shape did not increase the discriminability of different stimulus locations. Even when factoring out the change in mean vector response distance, multidimensional scaling (MDS) still showed a significant task difference in AIT, but not LIP, indicating that beyond increasing discriminability, attention also causes a nonlinear warping of representation space in AIT. Despite single-cell attentional modulations in both areas, our data show that attentional modulations of population representations are weaker in LIP, likely due to a need to maintain veridical representations for visuomotor control. PMID:29876521
Trends in space activities in 2014: The significance of the space activities of governments
NASA Astrophysics Data System (ADS)
Paikowsky, Deganit; Baram, Gil; Ben-Israel, Isaac
2016-01-01
This article addresses the principal events of 2014 in the field of space activities, and extrapolates from them the primary trends that can be identified in governmental space activities. In 2014, global space activities centered on two vectors. The first was geopolitical, and the second relates to the matrix between increasing commercial space activities and traditional governmental space activities. In light of these two vectors, the article outlines and analyzes trends of space exploration, human spaceflights, industry and technology, cooperation versus self-reliance, and space security and sustainability. It also reviews the space activities of the leading space-faring nations.
Discrete Fourier Transform in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2015-01-01
An image-based phase retrieval technique has been developed that can be used on board a space based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, particularly when parts of the calculation do not have to be repeated at each iteration to converge to an acceptable solution in order to focus an image.
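A generic illustration of what a "diagonal form via a similarity transform" can mean for the DFT, not the inventor's specific construction: a change of basis by the unitary DFT matrix brings any circulant (convolution-type) operator to diagonal form, which is the kind of basis transformation the abstract alludes to.

```python
# A circulant operator becomes diagonal under the unitary DFT similarity transform.
import numpy as np
from scipy.linalg import circulant, dft

n = 8
c = np.arange(1.0, n + 1.0)
C = circulant(c)                     # circulant operator (e.g. a convolution)
F = dft(n) / np.sqrt(n)              # unitary DFT matrix

D = F @ C @ np.conj(F).T             # similarity transform into the DFT basis
off_diag = np.abs(D - np.diag(np.diag(D))).max()
print(off_diag < 1e-10)              # True: C is diagonal in the DFT basis
```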
3-D Vector Flow Estimation With Row-Column-Addressed Arrays.
Holbek, Simon; Christiansen, Thomas Lehrmann; Stuart, Matthias Bo; Beers, Christopher; Thomsen, Erik Vilain; Jensen, Jorgen Arendt
2016-11-01
Simulation and experimental results from 3-D vector flow estimations for a 62 + 62 2-D row-column (RC) array with integrated apodization are presented. A method for implementing a 3-D transverse oscillation (TO) velocity estimator on a 3-MHz RC array is developed and validated. First, a parametric simulation study is conducted, where flow direction, ensemble length, number of pulse cycles, steering angles, transmit/receive apodization, and TO apodization profiles and spacing are varied, to find the optimal parameter configuration. The performance of the estimator is evaluated with respect to relative mean bias ~B and mean standard deviation ~σ . Second, the optimal parameter configuration is implemented on the prototype RC probe connected to the experimental ultrasound scanner SARUS. Results from measurements conducted in a flow-rig system containing a constant laminar flow and a straight-vessel phantom with a pulsating flow are presented. Both an M-mode and a steered transmit sequence are applied. The 3-D vector flow is estimated in the flow rig for four representative flow directions. In the setup with 90° beam-to-flow angle, the relative mean bias across the entire velocity profile is (-4.7, -0.9, 0.4)% with a relative standard deviation of (8.7, 5.1, 0.8)% for ( v x , v y , v z ). The estimated peak velocity is 48.5 ± 3 cm/s giving a -3% bias. The out-of-plane velocity component perpendicular to the cross section is used to estimate volumetric flow rates in the flow rig at a 90° beam-to-flow angle. The estimated mean flow rate in this setup is 91.2 ± 3.1 L/h corresponding to a bias of -11.1%. In a pulsating flow setup, flow rate measured during five cycles is 2.3 ± 0.1 mL/stroke giving a negative 9.7% bias. It is concluded that accurate 3-D vector flow estimation can be obtained using a 2-D RC-addressed array.
Geometrization of quantum physics
NASA Astrophysics Data System (ADS)
Ol'Khov, O. A.
2009-12-01
It is shown that the Dirac equation for a free particle can be considered as a description of a specific distortion of the Euclidean geometry of space (a space topological defect). This approach is based on the possibility of interpreting the wave function as a vector realizing a representation of the fundamental group of the closed topological space-time 4-manifold. Mass and spin appear to be topological invariants. Such a concept explains all the so-called "strange" properties of the quantum formalism: probabilities, wave-particle duality, nonlocal instantaneous correlation between noninteracting particles (the EPR paradox) and so on. Acceptance of the suggested geometrical concept means rejection of the atomistic concept in which all matter is considered as consisting of ever smaller elementary particles. There are no particles a priori, before measurement: the notion of a particle appears as a result of the classical interpretation of the contact of a region of the curved space with a device.
NASA Astrophysics Data System (ADS)
Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal
2018-06-01
Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
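A hedged sketch of the SVR step alone, using a plain delay (phase-space) embedding as the input matrix. The wavelet and SSA preprocessing of the study are not reproduced, the monthly flow series below is synthetic, and the scikit-learn hyperparameters are illustrative rather than tuned.

```python
# One-month-ahead SVR prediction from a delay embedding of a monthly flow series.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
t = np.arange(360)                                     # 30 years of months
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=5, size=t.size)

# Delay embedding: predict flow[t] from the previous `lags` months.
lags = 12
X = np.column_stack([flow[i:i - lags] for i in range(lags)])
y = flow[lags:]

split = -36                                            # hold out the last 3 years
model = SVR(kernel="rbf", C=100.0, epsilon=1.0, gamma="scale")
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute error on hold-out: {mae:.2f}")
```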
The SAMEX Vector Magnetograph: A Design Study for a Space-Based Solar Vector Magnetograph
NASA Technical Reports Server (NTRS)
Hagyard, M. J.; Gary, G. A.; West, E. A.
1988-01-01
This report presents the results of a pre-phase A study performed by the Marshall Space Flight Center (MSFC) for the Air Force Geophysics Laboratory (AFGL) to develop a design concept for a space-based solar vector magnetograph and hydrogen-alpha telescope. These are two of the core instruments for a proposed Air Force mission, the Solar Activities Measurement Experiments (SAMEX). This mission is designed to study the processes which give rise to activity in the solar atmosphere and to develop techniques for predicting solar activity and its effects on the terrestrial environment.
Vectoring of parallel synthetic jets: A parametric study
NASA Astrophysics Data System (ADS)
Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram
2016-11-01
The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).
Anisotropic fractal media by vector calculus in non-integer dimensional space
NASA Astrophysics Data System (ADS)
Tarasov, Vasily E.
2014-08-01
A review of different approaches to describing anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer dimensional spaces is considered. In this paper, differential operators of first and second order for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.
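One standard building block of such calculi, stated for the isotropic case, is the non-integer dimensional volume element used for radially symmetric integrands; the paper's product-measure construction generalizes measures of this kind to anisotropic, multi-fractional settings. The formula below is the commonly used one and is not quoted from the paper itself.

```latex
% Integration of a radially symmetric function over a space of
% non-integer dimension D (isotropic case).
\[
  \int_{\mathbb{R}^{D}} f(|\mathbf{r}|)\, dV_{D}
  \;=\;
  \frac{2\pi^{D/2}}{\Gamma\!\left(D/2\right)}
  \int_{0}^{\infty} f(r)\, r^{\,D-1}\, dr ,
  \qquad D \in \mathbb{R}_{>0}.
\]
```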
Evangelopoulos, Nicholas E
2013-11-01
This article reviews latent semantic analysis (LSA), a theory of meaning as well as a method for extracting that meaning from passages of text, based on statistical computations over a collection of documents. LSA as a theory of meaning defines a latent semantic space where documents and individual words are represented as vectors. LSA as a computational technique uses linear algebra to extract dimensions that represent that space. This representation enables the computation of similarity among terms and documents, categorization of terms and documents, and summarization of large collections of documents using automated procedures that mimic the way humans perform similar cognitive tasks. We present some technical details, various illustrative examples, and discuss a number of applications from linguistics, psychology, cognitive science, education, information science, and analysis of textual data in general. WIREs Cogn Sci 2013, 4:683-692. doi: 10.1002/wcs.1254
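A minimal LSA sketch on a toy term-document matrix: a truncated SVD defines the latent semantic space, where terms that never co-occur can still end up with similar vectors. The tiny count matrix and the two-dimensional truncation are illustrative, not data from the article.

```python
# Latent semantic analysis on a toy term-document count matrix.
import numpy as np

#                   d1  d2  d3  d4
counts = np.array([
    [1, 0, 0, 0],   # ship
    [0, 1, 0, 0],   # boat
    [1, 1, 1, 0],   # ocean
    [0, 0, 0, 1],   # wood
    [0, 0, 1, 1],   # tree
], dtype=float)

k = 2                                        # number of latent dimensions
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
term_vecs = U[:, :k] * s[:k]                 # terms in the latent space
doc_vecs = (Vt[:k, :] * s[:k, None]).T       # documents in the latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "ship" and "boat" share no document, so their raw cosine is 0; in the latent
# space their relatedness through "ocean" becomes visible (close to 1 here).
print(cosine(counts[0], counts[1]), round(cosine(term_vecs[0], term_vecs[1]), 3))
```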
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
A diagram for evaluating multiple aspects of model performance in simulating vector fields
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Hou, Zhaolu; Han, Ying; Guo, Weidong
2016-12-01
Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram and able to provide a concise evaluation of model performance in simulating vector fields. The diagram can measure how well two vector fields match each other in terms of three statistical variables, i.e., the vector similarity coefficient, root mean square length (RMSL), and root mean square vector difference (RMSVD). Similar to the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by a vector similarity coefficient (VSC) that is defined by the arithmetic mean of the inner product of normalized vector pairs. Examples are provided, showing that VSC can identify how close one vector field resembles another. Note that VSC can only describe the pattern similarity, and it does not reflect the systematic difference in the mean vector length between two vector fields. To measure the vector length, RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributed to the systematic difference in RMSL and how much is due to the poor pattern similarity.
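The three statistics of the VFE diagram can be computed directly from their descriptions in the abstract: the vector similarity coefficient as the mean inner product of normalized vector pairs, plus the root mean square length of each field and the root mean square vector difference between them. The sketch below assumes two vector fields stored as (N, 2) arrays of components and uses random stand-in data rather than model output.

```python
# VFE-diagram statistics for two 2-D vector fields sampled at N points.
import numpy as np

def vfe_stats(obs, mod):
    len_o = np.linalg.norm(obs, axis=1)
    len_m = np.linalg.norm(mod, axis=1)
    vsc = np.mean(np.sum(obs * mod, axis=1) / (len_o * len_m))   # pattern similarity
    rmsl_o = np.sqrt(np.mean(len_o ** 2))                        # RMS length, reference
    rmsl_m = np.sqrt(np.mean(len_m ** 2))                        # RMS length, model
    rmsvd = np.sqrt(np.mean(np.sum((mod - obs) ** 2, axis=1)))   # overall difference
    return vsc, rmsl_o, rmsl_m, rmsvd

rng = np.random.default_rng(4)
obs = rng.normal(size=(1000, 2))                    # "observed" vector winds
mod = obs + rng.normal(scale=0.3, size=obs.shape)   # a model with random errors

print([round(float(x), 3) for x in vfe_stats(obs, mod)])
```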
O Electromagnetic Power Waves and Power Density Components.
NASA Astrophysics Data System (ADS)
Petzold, Donald Wayne
1980-12-01
On January 10, 1884 Lord Rayleigh presented a paper entitled "On the Transfer of Energy in the Electromagnetic Field" to the Royal Society of London. This paper had been authored by the late Fellow of Trinity College, Cambridge, Professor J. H. Poynting and in it he claimed that there was a general law for the transfer of electromagnetic energy. He argued that associated with each point in space is a quantity, that has since been called the Poynting vector, that is a measure of the rate of energy flow per unit area. His analysis was concerned with the integration of this power density vector at all points over an enclosing surface of a specific volume. The interpretation of this Poynting vector as a true measure of the local power density was viewed with great skepticism unless the vector was integrated over a closed surface, as the development of the concept required. However, within the last decade or so Shadowitz indicates that a number of prominent authors have argued that the criticism of the interpretation of Poynting's vector as a local power density vector is unjustified. The present paper is not concerned with these arguments but instead is concerned with a decomposition of Poynting's power density vector into two and only two components: one vector which has the same direction as Poynting's vector and which is called the forward power density vector, and another vector, directed opposite to the Poynting vector and called the reverse power density vector. These new local forward and reverse power density vectors will be shown to be dependent upon forward and reverse power wave vectors and these vectors in turn will be related to newly defined forward and reverse components of the electric and magnetic fields. The sum of these forward and reverse power density vectors, which is simply the original Poynting vector, is associated with the total electromagnetic energy traveling past the local point. Another vector which is the difference between the forward and reverse power density vectors and which will be shown to be associated with the total electric and magnetic field energy densities existing at a local point will also be introduced. These local forward and reverse power density vectors may be integrated over a surface to determine the forward and reverse powers and from these results problems related to maximum power transfer or efficiency of electromagnetic energy transmission in space may be studied in a manner similar to that presently being done with transmission lines, waveguides, and, more recently, with two-port and multiport lumped parameter systems. These new forward and reverse power density vectors at a point in space are analogous to the forward and reverse voltages or currents and power waves as used with the transmission line, waveguide, or port. These power wave vectors in space are a generalization of the power waves as developed by Penfield, Youla, and Kurokawa and used with the scattering parameters associated with transmission lines, waveguides and ports.
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
Interoperability Policy Roadmap
2010-01-01
Retrieval – SMART. The technique developed by Dr. Gerard Salton for automated information retrieval and text analysis is called the vector-space... Salton, G., Wong, A., Yang, C.S., "A Vector Space Model for Automatic Indexing", Communications of the ACM, 18, 613-620. [10] Salton, G., McGill
NASA Astrophysics Data System (ADS)
Brasseur, James; Paes, Paulo; Chamecki, Marcelo
2017-11-01
Large-eddy simulation (LES) of the high Reynolds number rough-wall boundary layer requires both a subfilter-scale model for the unresolved inertial term and a ``surface stress model'' (SSM) for space-time local surface momentum flux. Standard SSMs assume proportionality between the local surface shear stress vector and the local resolved-scale velocity vector at the first grid level. Because the proportionality coefficient incorporates a surface roughness scale z0 within a functional form taken from law-of-the-wall (LOTW), it is commonly stated that LOTW is ``assumed,'' and therefore ``forced'' on the LES. We show that this is not the case; the LOTW form is the ``drag law'' used to relate friction velocity to mean resolved velocity at the first grid level consistent with z0 as the height where mean velocity vanishes. Whereas standard SSMs do not force LOTW on the prediction, we show that parameterized roughness does not match ``true'' z0 when LOTW is not predicted, or does not exist. By extrapolating mean velocity, we show a serious mismatch between true z0 and parameterized z0 in the presence of a spurious ``overshoot'' in normalized mean velocity gradient. We shall discuss the source of the problem and its potential resolution.
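The abstract turns on the log-law "drag law" relating the friction velocity to the resolved velocity at the first grid level. A minimal numerical sketch of such a closure is given below, assuming the standard logarithmic form with a von Kármán constant of 0.4 and a stress vector aligned with the local resolved velocity; the specific values are placeholders, not the authors' SSM.

```python
# Sketch of a log-law ("drag law") surface stress closure: given the resolved
# horizontal velocity at the first grid level z1 and a roughness length z0,
# estimate the friction velocity and the surface shear stress (kappa assumed 0.4).
import numpy as np

def surface_stress(u1, v1, z1, z0, kappa=0.4, rho=1.2):
    speed = np.hypot(u1, v1)
    u_star = kappa * speed / np.log(z1 / z0)      # friction velocity from the LOTW form
    tau = rho * u_star**2                          # stress magnitude
    # stress vector assumed aligned with the local resolved velocity
    return -tau * u1 / speed, -tau * v1 / speed, u_star

tx, ty, ust = surface_stress(u1=5.0, v1=1.0, z1=10.0, z0=0.1)
print(tx, ty, ust)
```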
[Personal protection measures against blood-sucking insects and ticks].
Orshan, Laor; Wilamowski, Amos; Pener, Hedva
2010-09-01
Blood-sucking arthropods are major vectors of various pathogens like viruses, bacteria, protozoa and nematodes. Preventing exposure to the vector is imperative especially when vaccine and prophylactic treatments are not available. Personal protection measures (PPM) are essential and often the only means available when dealing with blood-sucking disease transmitting arthropods. Awareness of the risk in the specific areas of travel is the first step to be taken before and while traveling. PPM include preventive personal behavior, suitable clothing, application of insect repellents to the skin, the use of space repellents, impregnation of clothing, camping gear and bed nets and, when necessary, ground spraying of insecticides. The registered and recommended active ingredients for skin application are Deet, picaridin (icaridin), p-menthane-3,8-diol (PMD) and IR3535. Volatile pyrethrins are used as space repellents while pyrethroids, especially permethrin, are employed for impregnation and for ground spraying. It is recommended to purchase only products registered in Israel or other developed countries. These products should have a detailed label specifying the concentration of the active ingredient, application instructions and the duration of protection.
Ranked centroid projection: a data visualization approach with self-organizing maps.
Yen, G G; Wu, Z
2008-02-01
The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
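For readers unfamiliar with SOM training, the following is a minimal sketch of a flat self-organizing map (not the growing hierarchical GHSOM or the ranked centroid projection of the paper), assuming random stand-in vectors in place of the encoded documents.

```python
# Minimal self-organizing map sketch (a flat SOM, not the growing hierarchical
# variant used in the paper): each map unit holds a prototype vector that is
# pulled toward the data, with a neighborhood that shrinks over time.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 10))                     # stand-in for encoded document vectors

rows, cols, dim = 6, 6, X.shape[1]
W = rng.random((rows, cols, dim))             # prototype (codebook) vectors
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, sigma0, lr0 = 2000, 3.0, 0.5
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    d = np.linalg.norm(W - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)          # best-matching unit
    sigma = sigma0 * np.exp(-t / n_iter)
    lr = lr0 * np.exp(-t / n_iter)
    h = np.exp(-np.sum((grid - np.array(bmu))**2, axis=2) / (2 * sigma**2))
    W += lr * h[..., None] * (x - W)                        # neighborhood update

# project each input to its best-matching unit (a crude stand-in for RCP)
bmus = [np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=2)), (rows, cols)) for x in X]
```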
Adherence of Myxobolus cerebralis myxospores to waders: Implications for disease dissemination
Gates, K.K.; Guy, C.S.; Zale, A.V.; Horton, T.B.
2008-01-01
The vectors involved in the spread of whirling disease, which is caused by Myxobolus cerebralis, are only partly understood. However, the parasite has rapidly become established in many regions, suggesting that it is easily disseminated. We gained insight into transport vectors by examining the surface porosity of common wading equipment materials and the adherence of M. cerebralis myxospores to them. Interstitial spaces within rubber, felt, lightweight nylon, and neoprene were measured on scanning electron microscope images. Myxospores were applied to each material, the material was rinsed, and the myxospores recovered to assess adherence. The mean interstitial space size of rubber was the smallest (2.0 µm), whereas that of felt was the largest (31.3 µm). The highest recovery rates were from rubber and the glass control. Percent myxospore recovery varied by material, the recovery from felt being lower than that from all other materials. The potential for felt to carry even small numbers of myxospores suggests that the introduction of M. cerebralis by felt-soled wading boots is possible. © Copyright by the American Fisheries Society 2008.
2008-01-09
The image data as acquired from the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data... The color of a material is defined by the direction of its unit vector in n-dimensional spectral space. The length of the vector relates only to how... to n-dimensional space. SAM determines the similarity
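The fragment above describes the spectral angle mapper (SAM). A minimal sketch of that similarity measure, with hypothetical band values, might look like this:

```python
# Sketch of the spectral angle mapper (SAM) idea from the excerpt above: the
# "color" of a material is the direction of its spectral vector, so similarity
# is the angle between a pixel spectrum and a reference spectrum.
import numpy as np

def spectral_angle(pixel, reference):
    cosang = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))   # radians; small angle = similar material

pixel = np.array([0.21, 0.35, 0.50, 0.44])          # hypothetical 4-band spectrum
reference = np.array([0.20, 0.33, 0.52, 0.45])
print(spectral_angle(pixel, reference))
```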
Development of a NEW Vector Magnetograph at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)
2001-01-01
This paper will describe the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. Our paper will describe the motivation for developing this magnetograph, compare this instrument with traditional magnetograph designs, and present a comparison of the data acquired by this instrument and original MSFC vector magnetograph.
Onwujekwe, Obinna; Malik, El-Fatih Mohamed; Mustafa, Sara Hassan; Mnzava, Abraham
2005-12-15
In order to optimally prioritize and use public and private budgets for equitable malaria vector control, there is a need to determine the level and determinants of consumer demand for different vector control tools. To determine the demand from people of different socio-economic groups for indoor residual house-spraying (IRHS), insecticide-treated nets (ITNs), larviciding with chemicals (LWC), and space spraying/fogging (SS) and the disease control implications of the result. Ratings and levels of willingness-to-pay (WTP) for the vector control tools were determined using a random cross-sectional sample of 720 households drawn from two states. WTP was elicited using the bidding game. An asset-based socio-economic status (SES) index was used to explore whether WTP was related to SES of the respondents. IRHS received the highest proportion of highest preferred rating (41.0%) followed by ITNs (23.1%). However, ITNs had the highest mean WTP followed by IRHS, while LWC had the least. The regression analysis showed that SES was positively and statistically significantly related to WTP across the four vector control tools and that the respondents' rating of IRHS and ITNs significantly explained their levels of WTP for the two tools. People were willing to pay for all the vector-control tools, but the demand for the vector control tools was related to the SES of the respondents. Hence, it is vital that there are public policies and financing mechanisms to ensure equitable provision and utilisation of vector control tools, as well as protecting the poor from cost-sharing arrangements.
Representation of magnetic fields in space
NASA Technical Reports Server (NTRS)
Stern, D. P.
1975-01-01
Several methods by which a magnetic field in space can be represented are reviewed, with particular attention to problems of the observed geomagnetic field. Time dependence is assumed to be negligible, and five main classes of representation are described: vector potential, scalar potential, orthogonal vectors, Euler potentials, and expanded magnetic field.
Knowledge Space: A Conceptual Basis for the Organization of Knowledge
ERIC Educational Resources Information Center
Meincke, Peter P. M.; Atherton, Pauline
1976-01-01
Proposes a new conceptual basis for visualizing the organization of information, or knowledge, which differentiates between the concept "vectors" for a field of knowledge represented in a multidimensional space, and the state "vectors" for a person based on his understanding of these concepts, and the representational…
Anisotropic fractal media by vector calculus in non-integer dimensional space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru
2014-08-15
A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.
Color TV: total variation methods for restoration of vector-valued images.
Blomgren, P; Chan, T F
1998-01-01
We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
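As an illustration of a vector-valued TV of this flavor, the sketch below combines per-channel total variations in a rotationally invariant way (root of the sum of squares), using simple forward differences; this is a plausible discrete stand-in, not necessarily the exact norm defined in the paper.

```python
# Sketch of a vector-valued total variation of the kind discussed above: the
# scalar TV of each color channel is combined in a rotationally invariant way
# (root of the sum of squares of the per-channel TVs). Finite differences are
# a simple discrete stand-in for the gradient.
import numpy as np

def channel_tv(u):
    dx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences, replicated edge
    dy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(dx**2 + dy**2))

def color_tv(img):                                  # img: H x W x 3 (RGB)
    return np.sqrt(sum(channel_tv(img[..., c])**2 for c in range(img.shape[-1])))

rng = np.random.default_rng(1)
noisy = np.clip(rng.normal(0.5, 0.1, (32, 32, 3)), 0, 1)
print(color_tv(noisy))
```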
ERIC Educational Resources Information Center
Vaughan, Herbert E.; Szabo, Steven
This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…
Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer
2016-12-01
Proving Ground, MD 21005-5068. This report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional space (3D) and a Rotation... class for performing rotations of vectors in 3D. Each class is self-contained in a single header file (Vector.h and Rotation.h) so that a C... Keywords: vector, rotation, 3D, quaternion, C++ tools, rotation sequence, Euler angles, yaw, pitch, roll, orientation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro
Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
Observation of Polarization Vortices in Momentum Space
NASA Astrophysics Data System (ADS)
Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian
2018-05-01
The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
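For intuition, the weighted Fréchet mean is the minimizer of the weighted sum of squared distances to the vertex points; in a Euclidean space this reduces to the weighted average, which the iterative sketch below recovers. The tree-space algorithms of the paper are far more involved; this is only the Euclidean analogue.

```python
# Sketch of the weighted Fréchet mean in a Euclidean space: the minimizer of the
# weighted sum of squared distances to the vertex points. In R^d this is just the
# weighted average; the paper studies the analogous object in tree space, where
# an iterative scheme of this flavor is needed instead of a closed form.
import numpy as np

def weighted_frechet_mean(points, weights, n_iter=200, step=0.1):
    points, weights = np.asarray(points, float), np.asarray(weights, float)
    m = points.mean(axis=0)
    for _ in range(n_iter):                      # gradient descent on sum_i w_i d(m, x_i)^2
        m -= step * 2 * np.sum(weights[:, None] * (m - points), axis=0)
    return m

vertices = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # k+1 = 3 vertex points -> a 2-D locus
w = [0.2, 0.5, 0.3]                               # weights on the 2-simplex
print(weighted_frechet_mean(vertices, w))         # matches np.average(vertices, 0, w)
```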
Analysis of structural response data using discrete modal filters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.
1991-01-01
The application of reciprocal modal vectors to the analysis of structural response data is described. Reciprocal modal vectors are constructed using an existing experimental modal model and an existing frequency response matrix of a structure, and can be assembled into a matrix that effectively transforms the data from the physical space to a modal space within a particular frequency range. In other words, the weighting matrix necessary for modal vector orthogonality (typically the mass matrix) is contained within the reciprocal modal matrix. The underlying goal of this work is mostly directed toward observing the modal state responses in the presence of unknown, possibly closed loop forcing functions, thus having an impact on both operating data analysis techniques and independent modal space control techniques. This study investigates the behavior of reciprocal modal vectors as modal filters with respect to certain calculation parameters and their performance with perturbed system frequency response data.
Modeling Musical Context With Word2Vec
NASA Astrophysics Data System (ADS)
Herremans, Dorien; Chuan, Ching-Hua
2017-05-01
We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice.
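A minimal sketch of the modeling step is given below, assuming gensim's 4.x Word2Vec API with skip-gram and negative sampling; the "slice" tokens are invented stand-ins for the encoded Beethoven slices used in the paper.

```python
# Sketch of the modeling step with gensim (4.x API assumed): each musical slice is
# treated as a token, a skip-gram model with negative sampling learns slice
# embeddings, and nearby vectors can then stand in for tonally related contexts.
# The slice vocabulary below is made up; the paper encodes Beethoven piano sonatas.
from gensim.models import Word2Vec

pieces = [
    ["C:maj", "G:maj", "A:min", "F:maj", "C:maj"],     # hypothetical slice sequences
    ["A:min", "E:maj", "A:min", "G:maj", "C:maj"],
    ["F:maj", "C:maj", "G:maj", "C:maj"],
]

model = Word2Vec(sentences=pieces, vector_size=16, window=2,
                 sg=1, negative=5, min_count=1, seed=7)

print(model.wv.most_similar("C:maj", topn=3))            # tonally "close" slices
vec = model.wv["G:maj"]                                   # embedding reusable for t-SNE, etc.
```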
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct), data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined. The direct space domain processing technique was found to be far superior to any other techniques known, in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.
NASA Astrophysics Data System (ADS)
Maruyama, Tomoyuki; Nakano, Eiji; Yanase, Kota; Yoshinaga, Naotaka
2018-06-01
The spontaneous spin polarization of strongly interacting matter due to axial-vector- and tensor-type interactions is studied at zero temperature and high baryon-number densities. We start with the mean-field Lagrangian for the axial-vector and tensor interaction channels and find in the chiral limit that the spin polarization due to the tensor mean field (U ) takes place first as the density increases for sufficiently strong coupling constants, and then the spin polarization due to the axial-vector mean field (A ) emerges in the region of the finite tensor mean field. This can be understood as making the axial-vector mean-field finite requires a broken chiral symmetry somehow, which is achieved by the finite tensor mean field in the present case. It is also found from the symmetry argument that there appear the type I (II) Nambu-Goldstone modes with a linear (quadratic) dispersion in the spin polarized phase with U ≠0 and A =0 (U ≠0 and A ≠0 ), although these two phases exhibit the same symmetry breaking pattern.
Wave Telescope Technique for MMS Magnetometer
NASA Technical Reports Server (NTRS)
Narita, Y.; Plaschke, F.; Nakamura, R.; Baumjohann, W.; Magnes, W.; Fischer, D.; Voros, Z.; Torbert, R. B.; Russell, C. T.; Strangeway, R. J.;
2016-01-01
Multipoint measurements are a powerful method in studying wavefields in space plasmas. The wave telescope technique is tested against magnetic field fluctuations in the terrestrial magnetosheath measured by the four Magnetospheric Multiscale (MMS) spacecraft on a spatial scale of about 20 km. The dispersion relation diagram and the wave vector distribution are determined for the first time in the ion-kinetic range. Moreover, the dispersion relation diagram is determined in a proxy plasma rest frame by regarding the low-frequency dispersion relation as a Doppler relation and compensating for the apparent phase velocity. Fluctuations are highly compressible, and the wave vectors have an angle of about 60 degrees from the mean magnetic field. We interpret that the measured fluctuations represent a kinetic-drift mirror mode in the magnetosheath which is dispersive and in a turbulent state accompanied by a sideband formation.
Technique of retinal gene therapy: delivery of viral vector into the subretinal space
Xue, K; Groppe, M; Salvetti, A P; MacLaren, R E
2017-01-01
Purpose Safe and reproducible delivery of gene therapy vector into the subretinal space is essential for successful targeting of the retinal pigment epithelium (RPE) and photoreceptors. The success of surgery is critical for the clinical efficacy of retinal gene therapy. Iatrogenic detachment of the degenerate (often adherent) retina in patients with hereditary retinal degenerations and small volume (eg, 0.1 ml) subretinal injections pose new surgical challenges. Methods Our subretinal gene therapy technique involved pre-operative planning with optical coherence tomography (OCT) and autofluorescence (AF) imaging, 23 G pars plana vitrectomy, internal limiting membrane staining with Membrane Blue Dual (DORC BV, Zuidland, Netherlands), a two-step subretinal injection using a 41 G Teflon tipped cannula (DORC) first with normal saline to create a parafoveal bleb followed by slow infusion of viral vector via the same self-sealing retinotomy. Surgical precision was further enhanced by intraoperative OCT (Zeiss Rescan 7000, Carl Zeiss Meditec AG, Jena, Germany). Foveal functional and structural recovery was evaluated using best-corrected Early Treatment Diabetic Retinopathy Study (ETDRS) visual acuity, microperimetry and OCT. Results Two patients with choroideremia aged 29 (P1) and 27 (P2) years, who had normal and symmetrical levels of best-corrected visual acuity (BCVA) in both eyes, underwent unilateral gene therapy with the fellow eye acting as internal control. The surgeries were uncomplicated in both cases with successful detachment of the macula by subretinal vector injection. Both treated eyes showed recovery of BCVA (P1: 76–77 letters; P2: 84–88 letters) and mean threshold sensitivity of the central macula (P1: 10.7–10.7 dB; P2: 14.2–14.1 dB) to baseline within a month. This was accompanied by normalisation of central retinal thickness on OCT. Conclusions Herein we describe a reliable technique for subretinal gene therapy, which is currently used in clinical trials to treat choroideremia using an adeno-associated viral (AAV) vector encoding the CHM gene. Strategies to minimise potential complications, such as avoidance of excessive retinal stretch, air bubbles within the injection system, reflux of viral vector and post-operative vitritis are discussed. PMID:28820183
Geometric Representations of Condition Queries on Three-Dimensional Vector Fields
NASA Technical Reports Server (NTRS)
Henze, Chris
1999-01-01
Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
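Following the description above, a minimal QKLMS sketch with a Gaussian kernel might look like the following; the step size, quantization size, and kernel width are illustrative choices rather than the paper's settings.

```python
# Minimal sketch of the quantized kernel LMS idea described above: a Gaussian
# kernel expansion whose dictionary grows only when a new input is farther than
# a quantization size eps from every existing center; otherwise the "redundant"
# input just updates the coefficient of its closest center.
import numpy as np

def gauss(u, c, h=1.0):
    return np.exp(-np.sum((u - c)**2, axis=-1) / (2 * h**2))

def qklms(U, d, eta=0.2, eps=0.5, h=1.0):
    centers, alphas, preds = [U[0]], [eta * d[0]], [0.0]
    for u, dn in zip(U[1:], d[1:]):
        y = sum(a * gauss(u, c, h) for a, c in zip(alphas, centers))
        e = dn - y
        preds.append(y)
        dists = [np.linalg.norm(u - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps:
            alphas[j] += eta * e          # quantization: reuse the closest center
        else:
            centers.append(u)             # grow the dictionary
            alphas.append(eta * e)
    return np.array(preds), centers

# toy static function estimation: d = sin(3u) + noise
rng = np.random.default_rng(0)
U = rng.uniform(-2, 2, (500, 1))
d = np.sin(3 * U[:, 0]) + 0.05 * rng.standard_normal(500)
preds, centers = qklms(U, d)
print(len(centers), np.mean((d[100:] - preds[100:])**2))
```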
A note on φ-analytic conformal vector fields
NASA Astrophysics Data System (ADS)
Deshmukh, Sharief; Bin Turki, Nasser
2017-09-01
Taking clue from the analytic vector fields on a complex manifold, φ-analytic conformal vector fields are defined on a Riemannian manifold (Deshmukh and Al-Solamy in Colloq. Math. 112(1):157-161, 2008). In this paper, we use φ-analytic conformal vector fields to find new characterizations of the n-sphere Sn(c) and the Euclidean space (Rn,<,> ).
Adaptive h -refinement for reduced-order models: ADAPTIVE h -refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
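A rough sketch of the splitting idea is given below, assuming scikit-learn's KMeans for the clustering of state variables; the recursion that builds the full tree and the dual-weighted-residual selection are omitted.

```python
# Sketch of the splitting step described above: state variables (rows of the
# snapshot matrix) are grouped by k-means, and a reduced-basis vector is split
# into child vectors with disjoint support, one per cluster. The recursion over
# clusters and the dual-weighted-residual selection are omitted.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((100, 20))      # n_state x n_snapshots (toy data)
v = rng.standard_normal(100)                    # one reduced-basis vector

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(snapshots)

children = []
for k in range(2):
    child = np.where(labels == k, v, 0.0)       # disjoint support
    children.append(child)

# the original vector is exactly recovered as the sum of its children
assert np.allclose(v, sum(children))
```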
Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem
1999-12-01
solution. The non-linear least squares model is defined as Y = f(θ, t), where θ is the M-element parameter vector, Y is the N-element vector of all data, and t... Naval Postgraduate School, Monterey, California; Master's thesis, December 1999: Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler.
Evaluation of candidate geomagnetic field models for IGRF-11
NASA Astrophysics Data System (ADS)
Finlay, C. C.; Maus, S.; Beggan, C. D.; Hamoudi, M.; Lowes, F. J.; Olsen, N.; Thébault, E.
2010-10-01
The eleventh generation of the International Geomagnetic Reference Field (IGRF) was agreed in December 2009 by a task force appointed by the International Association of Geomagnetism and Aeronomy (IAGA) Division V Working Group V-MOD. New spherical harmonic main field models for epochs 2005.0 (DGRF-2005) and 2010.0 (IGRF-2010), and predictive linear secular variation for the interval 2010.0-2015.0 (SV-2010-2015) were derived from weighted averages of candidate models submitted by teams led by DTU Space, Denmark (team A); NOAA/NGDC, U.S.A. (team B); BGS, U.K. (team C); IZMIRAN, Russia (team D); EOST, France (team E); IPGP, France (team F); GFZ, Germany (team G) and NASA-GSFC, U.S.A. (team H). Here, we report the evaluations of candidate models carried out by the IGRF-11 task force during October/November 2009 and describe the weightings used to derive the new IGRF-11 model. The evaluations include calculations of root mean square vector field differences between the candidates, comparisons of the power spectra, and degree correlations between the candidates and a mean model. Coefficient by coefficient analysis including determination of weighting factors used in a robust estimation of mean coefficients is also reported. Maps of differences in the vertical field intensity at Earth's surface between the candidates and weighted mean models are presented. Candidates with anomalous aspects are identified and efforts made to pinpoint both troublesome coefficients and geographical regions where large variations between candidates originate. A retrospective analysis of IGRF-10 main field candidates for epoch 2005.0 and predictive secular variation candidates for 2005.0-2010.0 using the new IGRF-11 models as a reference is also reported. The high quality and consistency of main field models derived using vector satellite data is demonstrated; based on internal consistency DGRF-2005 has a formal root mean square vector field error over Earth's surface of 1.0 nT. Difficulties nevertheless remain in accurately forecasting field evolution only five years into the future.
Onwujekwe, Obinna; Malik, El-Fatih Mohamed; Mustafa, Sara Hassan; Mnzava, Abraham
2005-01-01
Background In order to optimally prioritize and use public and private budgets for equitable malaria vector control, there is a need to determine the level and determinants of consumer demand for different vector control tools. Objectives To determine the demand from people of different socio-economic groups for indoor residual house-spraying (IRHS), insecticide-treated nets (ITNs), larviciding with chemicals (LWC), and space spraying/fogging (SS) and the disease control implications of the result. Methods Ratings and levels of willingness-to-pay (WTP) for the vector control tools were determined using a random cross-sectional sample of 720 households drawn from two states. WTP was elicited using the bidding game. An asset-based socio-economic status (SES) index was used to explore whether WTP was related to SES of the respondents. Results IRHS received the highest proportion of highest preferred rating (41.0%) followed by ITNs (23.1%). However, ITNs had the highest mean WTP followed by IRHS, while LWC had the least. The regression analysis showed that SES was positively and statistically significantly related to WTP across the four vector control tools and that the respondents' rating of IRHS and ITNs significantly explained their levels of WTP for the two tools. Conclusion People were willing to pay for all the vector-control tools, but the demand for the vector control tools was related to the SES of the respondents. Hence, it is vital that there are public policies and financing mechanisms to ensure equitable provision and utilisation of vector control tools, as well as protecting the poor from cost-sharing arrangements. PMID:16356177
Vertebrate Development in Space: Gravity Is a Drag (and Has Been for Eons and Eons)
NASA Technical Reports Server (NTRS)
Keefe, J. R.
1985-01-01
Brief sketches of developmental biology studies during spaceflight are presented; they are intended to be complete in scope and to provide the reader with an overview of the present status of such studies. Means of evaluating the direct role of gravity in all processes of mammalian reproduction and development, as well as means of assessing indirect transplacental aspects, are considered. The potential present in the development of a spaceflight system/program specifically designed to provide chronic exposure of a representative variety of mammalian species, with periodic sampling for multiple generations, to fully assess the potential impact of an altered gravitational vector on general mammalian development is also considered.
An Elementary Treatment of General Inner Products
ERIC Educational Resources Information Center
Graver, Jack E.
2011-01-01
A typical first course on linear algebra is usually restricted to vector spaces over the real numbers and the usual positive-definite inner product. Hence, the proof that dim(S) + dim(S⊥) = dim(V) is not presented in a way that is generalizable to non-positive-definite inner products or to vector spaces over other fields. In this…
Wigner functions on non-standard symplectic vector spaces
NASA Astrophysics Data System (ADS)
Dias, Nuno Costa; Prata, João Nuno
2018-01-01
We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.
Cellular Mechanisms of Gravitropic Response in Higher Plants
NASA Astrophysics Data System (ADS)
Medvedev, Sergei; Smolikova, Galina; Pozhvanov, Gregory; Suslov, Dmitry
The evolutionary success of land plants in adaptation to vectorial environmental factors was based mainly on the development of polarity systems. As a result, normal plant ontogenesis is based on positional information. Polarity is a tool by which the developing plant organs and tissues are mapped and the specific three-dimensional structure of the organism is created. It is due to their polar organization that plants are able to orient themselves relative to the gravity vector and different vectorial cues, and to respond adequately to various stimuli. Gravitation is one of the most important polarized environmental factors that guide the development of plant organisms in space. Every plant can "estimate" its position relative to the gravity vector and correct it, if necessary, by means of polarized growth. The direction and the magnitude of the gravitational stimulus are constant during the whole plant ontogenesis. The key plant response to the action of gravity is gravitropism, i.e. the directed growth of organs with respect to the gravity vector. This response is a very convenient model to study the mechanisms of plant orientation in space. The present report is focused on the main cellular mechanisms responsible for gravitropic bending in higher plants. These mechanisms and structures include electric polarization of plant cells, Ca2+ gradients, the cytoskeleton, G-proteins, phosphoinositides and the machinery responsible for asymmetric auxin distribution. These mechanisms tightly interact, demonstrating some hierarchy and multiple feedbacks. The Ca2+ gradients provide the primary physiological basis of polarity in plant cells. Calcium ions influence the bioelectric potentials, the organization of the actin cytoskeleton, the activity of Ca2+-binding proteins and Ca2+-dependent protein kinases. Protein kinases modulate transcription factor activity, thereby regulating gene expression and switching the developmental programs. The actin cytoskeleton affects the molecular machinery of polar auxin transport. This results in changes of auxin gradients in plant organs and tissues, which modulate all cellular mechanisms of polarity via multiple feedback loops. Understanding the mechanisms by which the plant organism orients itself relative to the gravity vector will allow us to develop efficient technologies for plant growing in microgravity conditions at orbital space stations and during long piloted space flights. This work was supported by the grant of the Russian Foundation for Basic Research (N 14-04-01-624) and by the grant of St.-Petersburg State University (N 1.38.233.2014).
Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.
Demartines, P; Herault, J
1997-01-01
We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.
Effective Numerical Methods for Solving Elliptical Problems in Strengthened Sobolev Spaces
NASA Technical Reports Server (NTRS)
D'yakonov, Eugene G.
1996-01-01
Fourth-order elliptic boundary value problems in the plane can be reduced to operator equations in Hilbert spaces G that are certain subspaces of the Sobolev space W_2^2(Omega) ≡ G^(2). The appearance of asymptotically optimal algorithms for Stokes-type problems made it natural to focus on an approach that considers rot w ≡ (D_2 w, -D_1 w) ≡ u as a new unknown vector function, which automatically satisfies the condition div u = 0. In this work, we show that this approach can also be developed for an important class of problems from the theory of plates and shells with stiffeners. The main mathematical problem was to show that the well-known inf-sup condition (normal solvability of the divergence operator) holds for special Hilbert spaces. This result is also essential for certain hydrodynamics problems.
Thrust vector control using electric actuation
NASA Astrophysics Data System (ADS)
Bechtel, Robert T.; Hall, David K.
1995-01-01
Presently, gimbaling of launch vehicle engines for thrust vector control is generally accomplished using a hydraulic system. In the case of the space shuttle solid rocket boosters and main engines, these systems are powered by hydrazine auxiliary power units. Use of electromechanical actuators would provide significant advantages in cost and maintenance. However, present energy source technologies such as batteries are heavy to the point of causing significant weight penalties. Utilizing capacitor technology developed by the Auburn University Space Power Institute in collaboration with the Auburn CCDS, Marshall Space Flight Center (MSFC) and Auburn are developing EMA system components with emphasis on high discharge rate energy sources compatible with space shuttle type thrust vector control requirements. Testing has been done at MSFC as part of EMA system tests with loads up to 66000 newtons for pulse times of several seconds. Results show such an approach to be feasible providing a potential for reduced weight and operations costs for new launch vehicles.
An emergence of coordinated communication in populations of agents.
Kvasnicka, V; Pospichal, J
1999-01-01
The purpose of this article is to demonstrate that coordinated communication spontaneously emerges in a population composed of agents that are capable of specific cognitive activities. Internal states of agents are characterized by meaning vectors. Simple neural networks composed of one layer of hidden neurons perform cognitive activities of agents. An elementary communication act consists of the following: (a) two agents are selected, where one of them is declared the speaker and the other the listener; (b) the speaker codes a selected meaning vector onto a sequence of symbols and sends it to the listener as a message; and finally, (c) the listener decodes this message into a meaning vector and adapts his or her neural network such that the differences between speaker and listener meaning vectors are decreased. A Darwinian evolution enlarged by ideas from the Baldwin effect and Dawkins' memes is simulated by a simple version of an evolutionary algorithm without crossover. The agent fitness is determined by success of the mutual pairwise communications. It is demonstrated that agents in the course of evolution gradually do a better job of decoding received messages (they are closer to meaning vectors of speakers) and all agents gradually start to use the same vocabulary for the common communication. Moreover, if agent meaning vectors contain regularities, then these regularities are manifested also in messages created by agent speakers, that is, similar parts of meaning vectors are coded by similar symbol substrings. This observation is considered a manifestation of the emergence of a grammar system in the common coordinated communication.
Dynamic analysis of suspension cable based on vector form intrinsic finite element method
NASA Astrophysics Data System (ADS)
Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun
2017-10-01
A vector finite element method is presented for the dynamic analysis of cable structures based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into different elements by space points, and the mass and external forces of the suspension cable are transferred to the space points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with the experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high calculation precision and meets the requirements of engineering application.
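In the spirit of the procedure described above (though not the full VFIFE formulation), the sketch below advances a chain of lumped-mass space points joined by elastic segments with an explicit central-difference scheme; the cable properties and end conditions are invented for illustration.

```python
# Minimal sketch in the spirit of the procedure described above (not the full
# VFIFE formulation): the cable is a chain of lumped-mass "space points" joined
# by elastic segments; internal forces come from segment stretch, and the point
# motions are advanced with an explicit central-difference scheme.
import numpy as np

n, L, EA, m, g, dt = 21, 10.0, 1.0e5, 0.5, 9.81, 1.0e-4
x = np.linspace(0, L, n)
pos = np.stack([x, np.zeros(n)], axis=1)        # initial straight configuration
prev = pos.copy()                               # previous step (zero initial velocity)
rest = L / (n - 1)

def internal_forces(p):
    f = np.zeros_like(p)
    for i in range(n - 1):
        d = p[i + 1] - p[i]
        ln = np.linalg.norm(d)
        t = EA * (ln - rest) / rest             # axial force in segment i
        f[i] += t * d / ln
        f[i + 1] -= t * d / ln
    return f

for step in range(20000):
    f = internal_forces(pos)
    f[:, 1] -= m * g                            # gravity on every space point
    new = 2 * pos - prev + (dt**2 / m) * f      # central difference update
    new[0], new[-1] = pos[0], pos[-1]           # both ends held fixed
    prev, pos = pos, new

print(pos[n // 2])                              # sag of the mid-span point
```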
Evolution of passive scalar statistics in a spatially developing turbulence
NASA Astrophysics Data System (ADS)
Paul, I.; Papadakis, G.; Vassilicos, J. C.
2018-02-01
We investigate the evolution of passive scalar statistics in a spatially developing turbulence using direct numerical simulation. Turbulence is generated by a square grid element, which is heated continuously, and the passive scalar is temperature. The square element is the fundamental building block for both regular and fractal grids. We trace the dominant mechanisms responsible for the dynamical evolution of scalar-variance and its dissipation along the bar and grid-element centerlines. The scalar-variance is generated predominantly by the action of the mean scalar gradient behind the bar and is transported laterally by turbulent fluctuations to the grid-element centerline. The scalar-variance dissipation (proportional to the scalar-gradient variance) is produced primarily by the compression of the fluctuating scalar-gradient vector by the turbulent strain rate, while the contribution of mean velocity and scalar fields is negligible. Close to the grid element the scalar spectrum exhibits a well-defined −5/3 power law, even though the basic premises of the Kolmogorov-Obukhov-Corrsin theory are not satisfied (the fluctuating scalar field is highly intermittent, inhomogeneous, and anisotropic, and the local Corrsin-microscale-Péclet number is small). At this location, the PDF of scalar gradient production is only slightly skewed towards positive, and the fluctuating scalar-gradient vector aligns only with the compressive strain-rate eigenvector. The scalar-gradient vector is stretched or compressed stronger than the vorticity vector by turbulent strain rate throughout the grid-element centerline. However, the alignment of the former changes much earlier in space than that of the latter, resulting in scalar-variance dissipation to decay earlier along the grid-element centerline compared to the turbulent kinetic energy dissipation. The universal alignment behavior of the scalar-gradient vector is found far downstream, although the local Reynolds and Péclet numbers (based on the Taylor and Corrsin length scales, respectively) are low.
Pattern-histogram-based temporal change detection using personal chest radiographs
NASA Astrophysics Data System (ADS)
Ugurlu, Yucel; Obi, Takashi; Hasegawa, Akira; Yamaguchi, Masahiro; Ohyama, Nagaaki
1999-05-01
An accurate and reliable detection of temporal changes from a pair of images is of considerable interest in medical science. Traditional registration and subtraction techniques can be applied to extract temporal differences when the object is rigid or corresponding points are obvious. However, in radiological imaging, loss of depth information, the elasticity of the object, the absence of clearly defined landmarks, and three-dimensional positioning differences constrain the performance of conventional registration techniques. In this paper, we propose a new method to detect interval changes accurately without using an image registration technique. The method is based on the construction of a so-called pattern histogram and a comparison procedure. The pattern histogram is a graphic representation of the frequency counts of all allowable patterns in the multi-dimensional pattern vector space. The K-means algorithm is employed to partition the pattern vector space successively. Any difference in the pattern histograms implies that different patterns are involved in the scenes. In our experiment, a pair of chest radiographs of pneumoconiosis is employed and the changing histogram bins are visualized on both of the images. We found that the method can be used as an alternative way of temporal change detection, particularly when precise image registration is not available.
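A rough sketch of the pattern-histogram comparison is shown below, assuming small non-overlapping patches as pattern vectors and scikit-learn's KMeans for the partition of pattern space; patch size, cluster count, and threshold are assumptions, not the paper's settings.

```python
# Sketch of the pattern-histogram idea: local patch vectors from two images are
# assigned to a shared k-means partition of pattern space, each image gets a
# histogram of cluster counts, and differing bins flag candidate temporal changes.
import numpy as np
from sklearn.cluster import KMeans

def patches(img, w=4):
    h, ww = img.shape
    out = [img[i:i + w, j:j + w].ravel()
           for i in range(0, h - w + 1, w) for j in range(0, ww - w + 1, w)]
    return np.array(out)

rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = img1.copy()
img2[20:36, 20:36] += 0.5                       # synthetic "interval change"

P1, P2 = patches(img1), patches(img2)
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack([P1, P2]))
h1 = np.bincount(km.predict(P1), minlength=16)
h2 = np.bincount(km.predict(P2), minlength=16)

changed_bins = np.where(np.abs(h1 - h2) > 2)[0]  # bins whose counts differ
print(changed_bins)
```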
The Space Geodesy Project and Radio Frequency Interference Characterization and Mitigation
NASA Technical Reports Server (NTRS)
Lawrence, Hilliard M.; Beaudoin, C.; Corey, B. E.; Tourain, C. L.; Petrachenko, B.; Dickey, John
2013-01-01
The Space Geodesy Project (SGP) development by NASA is an effort to co-locate the four international geodetic techniques Satellite Laser Ranging (SLR) and Lunar Laser Ranging (LLR), Very Long Baseline Interferometry (VLBI), Global Navigation Satellite System (GNSS), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) into one tightly referenced campus and coordinated reference frame analysis. The SGP requirement locates these stations within a small area to maintain line-of-sight for the frequent automated survey known as the vector tie system. This causes a direct conflict with the new broadband VLBI technique: broadband means operation over 2-14 GHz, with RFI susceptibility at -80 dBW or higher due to sensitive RF components in the front end of the radio receiver.
Current algebra, statistical mechanics and quantum models
NASA Astrophysics Data System (ADS)
Vilela Mendes, R.
2017-11-01
Results obtained in the past for free boson systems at zero and nonzero temperatures are revisited to clarify the physical meaning of current algebra reducible functionals which are associated to systems with density fluctuations, leading to observable effects on phase transitions. To use current algebra as a tool for the formulation of quantum statistical mechanics amounts to the construction of unitary representations of diffeomorphism groups. Two mathematical equivalent procedures exist for this purpose. One searches for quasi-invariant measures on configuration spaces, the other for a cyclic vector in Hilbert space. Here, one argues that the second approach is closer to the physical intuition when modelling complex systems. An example of application of the current algebra methodology to the pairing phenomenon in two-dimensional fermion systems is discussed.
Characterization of dual-polarization LTE radio over a free-space optical turbulence channel.
Bohata, J; Zvanovec, S; Korinek, T; Mansour Abadi, M; Ghassemlooy, Z
2015-08-10
A dual polarization (DP) radio over a free-space optical (FSO) communication link using a long-term evolution (LTE) radio signal is proposed and analyzed under different turbulence channel conditions. Radio signal transmission over the DP FSO channel is experimentally verified by means of error vector magnitude (EVM) statistics. We demonstrate that such a system, employing 64 quadrature amplitude modulation at the frequency bands of 800 MHz and 2.6 GHz, remains reliable with EVM below 8% in a turbulent channel. Based on the results, we show that transmitting the LTE signal over the FSO channel is a potential solution for last-mile access or backbone networks when using multiple-input multiple-output based DP signals.
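For reference, EVM can be computed as the RMS error vector relative to the RMS reference constellation magnitude (one common normalization); the sketch below uses synthetic 64-QAM symbols rather than measured LTE data.

```python
# Sketch of the error vector magnitude (EVM) figure used above: the RMS length of
# the error vector between received and reference constellation points, expressed
# as a percentage of the RMS reference magnitude (one common normalization).
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(-7, 8, 2)                          # 64-QAM grid, +/-1 .. +/-7
ref = rng.choice(levels, 1000) + 1j * rng.choice(levels, 1000)
rx = ref + 0.3 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

evm = 100 * np.sqrt(np.mean(np.abs(rx - ref)**2) / np.mean(np.abs(ref)**2))
print(f"EVM = {evm:.2f} %")
```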
NASA Technical Reports Server (NTRS)
Balasubramaniam, K. S.; West, E. A.
1991-01-01
The Marshall Space Flight Center (MSFC) vector magnetograph is a tunable filter magnetograph with a bandpass of 125 mA. Results are presented of the inversion of Stokes polarization profiles, observed with the MSFC vector magnetograph centered on a sunspot, to recover the vector magnetic field parameters and thermodynamic parameters of the spectral line forming region for the Fe I 5250.2 A line, using a nonlinear least-squares fitting technique. As a preliminary investigation, it is also shown that the recovered thermodynamic parameters could be better understood if the fitted parameters like Doppler width, opacity ratio, and damping constant were broken down into more basic quantities like temperature, microturbulent velocity, or density parameter.
Characteristic classes of gauge systems
NASA Astrophysics Data System (ADS)
Lyakhovich, S. L.; Sharapov, A. A.
2004-12-01
We define and study invariants which can be uniformly constructed for any gauge system. By a gauge system we understand an (anti-)Poisson supermanifold provided with an odd Hamiltonian self-commuting vector field called a homological vector field. This definition encompasses all the cases usually included into the notion of a gauge theory in physics as well as some other similar (but different) structures like Lie or Courant algebroids. For Lagrangian gauge theories or Hamiltonian first class constrained systems, the homological vector field is identified with the classical BRST transformation operator. We define characteristic classes of a gauge system as universal cohomology classes of the homological vector field, which are uniformly constructed in terms of this vector field itself. Not striving to exhaustively classify all the characteristic classes in this work, we compute those invariants which are built up in terms of the first derivatives of the homological vector field. We also consider the cohomological operations in the space of all the characteristic classes. In particular, we show that the (anti-)Poisson bracket becomes trivial when applied to the space of all the characteristic classes, instead the latter space can be endowed with another Lie bracket operation. Making use of this Lie bracket one can generate new characteristic classes involving higher derivatives of the homological vector field. The simplest characteristic classes are illustrated by the examples relating them to anomalies in the traditional BV or BFV-BRST theory and to characteristic classes of (singular) foliations.
NASA Astrophysics Data System (ADS)
Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie
2018-03-01
Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space, with an all passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed through interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry the experimental studies on general parity-time symmetry physics and further reveals the unique functionalities enabled by the judiciously tailored unidirectional wave vectors in space.
On Anholonomic Deformation, Geometry, and Differentiation
2013-02-01
...αβχ are not necessarily Levi-Civita connection coefficients). The vector cross product × obeys, for two vectors V and W and two covectors α and β, V... three-dimensional space. 2.2.5. Euclidean space. Let G_AB(X) = G_A · G_B be the metric tensor of the space. The Levi-Civita connection coefficients of G_AB... the curvature tensor of the Levi-Civita connection vanishes identically: R^A_{BCD} = 2(∂_{[B} G^A_{C]D} + G^A_{[B|E|} G^E_{C]D}) = 0. (43)
Differential Calculus on h-Deformed Spaces
NASA Astrophysics Data System (ADS)
Herlemont, Basile; Ogievetsky, Oleg
2017-10-01
We construct the rings of generalized differential operators on the h-deformed vector space of gl-type. In contrast to the q-deformed vector space, where the ring of differential operators is unique up to an isomorphism, the general ring of h-deformed differential operators Diff_{h,σ}(n) is labeled by a rational function σ in n variables, satisfying an over-determined system of finite-difference equations. We obtain the general solution of the system and describe some properties of the rings Diff_{h,σ}(n).
Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader
2012-09-01
In this work we suggest an original fault signature based on an improved combination of Hilbert and Park transforms. Starting from this combination we can create two fault signatures: Hilbert modulus current space vector (HMCSV) and Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of spectral components, relative to the studied faults (air-gap eccentricity and outer raceway ball bearing defect), are extracted in order to develop the input vector necessary for learning and testing the support vector machine, with the aim of automatically classifying the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
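As a rough illustration of how such signatures can be formed, the sketch below builds a current space vector from three phase currents with a Concordia/Park-type transform, takes the Hilbert transform of its modulus and phase, and inspects their FFT spectra. It is a minimal stand-in for the HMCSV/HPCSV construction described above, not the authors' implementation; the transform coefficients, signal names and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def park_vector(ia, ib, ic):
    """Current space vector i_d + j*i_q from three phase currents (Concordia/Park form)."""
    i_d = np.sqrt(2.0 / 3.0) * ia - ib / np.sqrt(6.0) - ic / np.sqrt(6.0)
    i_q = (ib - ic) / np.sqrt(2.0)
    return i_d + 1j * i_q

def hilbert_signatures(ia, ib, ic, fs):
    """Return frequencies and FFT magnitudes of the Hilbert modulus and Hilbert phase
    of the current space vector (HMCSV- and HPCSV-like signatures)."""
    isv = park_vector(ia, ib, ic)
    analytic = hilbert(np.abs(isv))            # analytic signal of the modulus
    hmcsv = np.abs(analytic)                   # Hilbert modulus of the space vector
    hpcsv = np.unwrap(np.angle(analytic))      # Hilbert phase of the space vector
    freqs = np.fft.rfftfreq(len(isv), d=1.0 / fs)
    spec_mod = np.abs(np.fft.rfft(hmcsv - hmcsv.mean()))
    spec_phase = np.abs(np.fft.rfft(hpcsv - hpcsv.mean()))
    return freqs, spec_mod, spec_phase
```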
Evolution of Lamb Vector as a Vortex Breaking into Turbulence.
NASA Astrophysics Data System (ADS)
Wu, J. Z.; Lu, X. Y.
1996-11-01
In an incompressible flow, either laminar or turbulent, the Lamb vector is solely responsible for nonlinear interactions. While its longitudinal part is balanced by stagnation enthalpy, its transverse part is the unique source (as an external forcing in spectral space) that causes the flow to evolve. Moreover, in Reynolds-averaged flows the turbulent force can be derived exclusively from the Lamb vector instead of the full Reynolds stress tensor. Therefore, studying the evolution of the Lamb vector itself (both longitudinal and transverse parts) is of great interest. We have numerically examined this problem, taking the nonlinear destabilization of a viscous vortex as an example. In the later stage of this evolution we introduced a forcing to keep a statistically steady state, and observed the Lamb vector behavior in the resulting fine turbulence. The result is presented in both physical and spectral spaces.
Optoelectronic Inner-Product Neural Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1993-01-01
Optoelectronic apparatus acts as artificial neural network performing associative recall of binary images. Recall process is iterative one involving optical computation of inner products between binary input vector and one or more reference binary vectors in memory. Inner-product method requires far less memory space than matrix-vector method.
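A minimal numerical sketch of inner-product associative recall is given below, assuming bipolar (+1/-1) encodings of the binary images; it mimics in software the optical computation of inner products described above and is not the optoelectronic implementation itself.

```python
import numpy as np

def inner_product_recall(x, memory, n_iter=5):
    """Iterative inner-product associative recall for bipolar (+1/-1) vectors.

    memory : (M, N) array of M stored reference vectors
    x      : (N,)   input vector, possibly corrupted by noise
    """
    y = x.copy()
    for _ in range(n_iter):
        scores = memory @ y                  # inner products with every stored vector
        y = np.sign(scores @ memory)         # weighted sum of references, thresholded
        y[y == 0] = 1                        # break ties deterministically
    return y
```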
Human pose tracking from monocular video by traversing an image motion mapped body pose manifold
NASA Astrophysics Data System (ADS)
Basu, Saurav; Poulin, Joshua; Acton, Scott T.
2010-01-01
Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.
Unsupervised color image segmentation using a lattice algebra clustering technique
NASA Astrophysics Data System (ADS)
Urcid, Gonzalo; Ritter, Gerhard X.
2011-08-01
In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebychev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
Intertwining solutions for magnetic relativistic Hartree type equations
NASA Astrophysics Data System (ADS)
Cingolani, Silvia; Secchi, Simone
2018-05-01
We consider the magnetic pseudo-relativistic Schrödinger equation, where m > 0, V is an external continuous scalar potential, A is a continuous vector potential, and the nonlinearity involves a convolution kernel. We assume that A and V are symmetric with respect to a closed subgroup G of the group of orthogonal linear transformations of the underlying space. If, for any x, the cardinality of the G-orbit of x is infinite, then we prove the existence of infinitely many intertwining solutions, assuming that A is either linear in x or uniformly bounded. The results are proved by means of a new local realization of the square root of the magnetic Laplacian as a local elliptic operator with a Neumann boundary condition on a half-space. Moreover, we derive an existence result of a ground state intertwining solution for bounded vector potentials, if G admits a finite orbit.
Clustering Tree-structured Data on Manifold
Lu, Na; Miao, Hongyu
2016-01-01
Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on manifold instead of Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on matrix manifold. We incorporate the structure constraints embedded in data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts such as the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
An Investigation of the Jetevator as a Means of Thrust Vector Control
1958-02-01
actual rocket firings. Description of the Tests: The cold-flow jetevator tests were conducted in the engine test cells of the Ordnance Aerophysics… 45 and 210 psia, as noted on the figures. The cell pressure was adjusted to give a ratio of supply pressure to cell pressure of approximately 37… LMSD-2630. (Figure: deflected nozzle with jetevator, showing the gap, the jetevator flow, jetevator deflection angles, and the hinge point.)
NASA Astrophysics Data System (ADS)
Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik
2018-05-01
Time series data are data taken or measured at regular time intervals. Time series analysis is used to analyze data while taking the effect of time into account; its purpose is to characterize the patterns in a data set and to predict future values based on past observations. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal Autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results of this research show that the state space model of order 4 forecasts electric energy consumption with a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
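The sketch below illustrates the forecasting workflow in a simplified form: it fits an autoregressive model by least squares, produces multi-step forecasts, and scores them with MAPE. It is a hedged stand-in for the canonical-correlation state space modeling described in the abstract; the AR fit, the chosen order and the data are illustrative assumptions.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model y_t = c + a_1 y_{t-1} + ... + a_p y_{t-p}."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k:len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef                      # [c, a_1, ..., a_p]

def forecast_ar(y, coef, steps):
    """Iterated multi-step forecast from the fitted AR coefficients."""
    p = len(coef) - 1
    hist = list(np.asarray(y, dtype=float)[-p:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + np.dot(coef[1:], hist[::-1])   # most recent value first
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(out)

def mape(actual, predicted):
    """Mean Absolute Percentage Error in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```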
Computational model of a vector-mediated epidemic
NASA Astrophysics Data System (ADS)
Dickman, Adriana Gomes; Dickman, Ronald
2015-05-01
We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
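A toy version of such a host-vector lattice simulation is sketched below: sedentary hosts sit on an L x L grid, vectors hop between neighbouring sites, and infection passes between co-located hosts and vectors with given rates. The parameter names and update rules are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(host_inf, vec_pos, vec_inf, L, lam_hv, lam_vh, rec, diff=0.5):
    """One update of a toy host-vector lattice model.

    host_inf : (L, L) bool, infected hosts (hosts are sedentary, one per site)
    vec_pos  : (M, 2) int, vector (mosquito) positions on the grid
    vec_inf  : (M,)   bool, infected vectors
    """
    # vectors diffuse: random nearest-neighbour hops with probability `diff`
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    hop = moves[rng.integers(0, 4, size=len(vec_pos))]
    do_hop = rng.random(len(vec_pos)) < diff
    vec_pos = (vec_pos + do_hop[:, None] * hop) % L

    # host -> vector transmission at sites with an infected host
    at_inf_host = host_inf[vec_pos[:, 0], vec_pos[:, 1]]
    vec_inf |= (rng.random(len(vec_inf)) < lam_hv) & at_inf_host

    # vector -> host transmission
    biting = vec_inf & (rng.random(len(vec_inf)) < lam_vh)
    host_inf[vec_pos[biting, 0], vec_pos[biting, 1]] = True

    # infected hosts recover (return to susceptible) with probability `rec`
    host_inf &= rng.random(host_inf.shape) >= rec
    return host_inf, vec_pos, vec_inf
```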
Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism
2013-11-12
Nicholas J. Frigo, Vincent J. Urick, and Frank Bucholtz; Photonics Technology Branch, Optical Sciences Division, Naval Research Laboratory, Code 5650, 4555 Overlook Avenue, SW; Annapolis, Maryland. Coupled mode…
On the n-symplectic structure of faithful irreducible representations
NASA Astrophysics Data System (ADS)
Norris, L. K.
2017-04-01
Each faithful irreducible representation of an N-dimensional vector space V1 on an n-dimensional vector space V2 is shown to define a unique irreducible n-symplectic structure on the product manifold V1×V2 . The basic details of the associated Poisson algebra are developed for the special case N = n2, and 2n-dimensional symplectic submanifolds are shown to exist.
A phenomenological calculus of Wiener description space.
Richardson, I W; Louie, A H
2007-10-01
The phenomenological calculus is a categorical example of Robert Rosen's modeling relation. This paper is an alligation of the phenomenological calculus and generalized harmonic analysis, another categorical example. Our epistemological exploration continues into the realm of Wiener description space, in which constitutive parameters are extended from vectors to vector-valued functions of a real variable. Inherent in the phenomenology are fundamental representations of time and nearness to equilibrium.
Recent Selected Papers of Northwestern Polytechnical University in Two Parts, Part II.
1981-08-28
Table of Contents (excerpt): Dual Properties of Elastic Structures; Matrix Analysis of Wings; On a Method for the Determination of Plane Stress Fracture… The equation above means that the displacement function vector determines the strain function vector. (Assumption II… means that the distributed load function vector is determined by the stress function vector.) In Section 1, there was an analysis of a three…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berres, Anne Sabine
This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features in scalar fields and scalar field visualization, and vector field topology covers finding topological features in vector fields and vector field visualization.
NASA Astrophysics Data System (ADS)
Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.
2015-05-01
Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as, radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550nm), comprising an aggregate 80 Gbit/s, were transmitted ~1m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties < 3.41dB.
de Melo, Diogo Portella Ornelas; Scherrer, Luciano Rios; Eiras, Álvaro Eduardo
2012-01-01
The use of vector surveillance tools for preventing dengue disease requires fine assessment of risk, in order to improve vector control activities. Nevertheless, the thresholds between vector detection and dengue fever occurrence are currently not well established. In Belo Horizonte (Minas Gerais, Brazil), dengue has been endemic for several years. From January 2007 to June 2008, the dengue vector Aedes (Stegomyia) aegypti was monitored by ovitrap, the sticky-trap MosquiTRAP™ and larval surveys in a study area in Belo Horizonte. Using a space-time scan for cluster detection implemented in SaTScan software, the vector presence recorded by the different monitoring methods was evaluated. Clusters of vectors and dengue fever were detected. It was verified that ovitrap and MosquiTRAP vector detection methods predicted dengue occurrence better than larval survey, both spatially and temporally. MosquiTRAP and ovitrap presented similar results for space-time intersections with dengue fever clusters. Nevertheless, ovitrap clusters presented longer duration periods than MosquiTRAP ones, less accurately signaling the dengue risk areas, since the detection of vector clusters during most of the study period was not necessarily correlated to dengue fever occurrence. It was verified that ovitrap clusters occurred more than 200 days (values ranged from 97.0±35.35 to 283.0±168.4 days) before dengue fever clusters, whereas MosquiTRAP clusters preceded dengue fever clusters by approximately 80 days (values ranged from 65.5±58.7 to 94.0±14.3 days), the latter being more temporally precise. Thus, in the present cluster analysis study MosquiTRAP presented superior results for signaling dengue transmission risks both geographically and temporally. Since early detection is crucial for planning and deploying effective preventions, MosquiTRAP proved to be a reliable tool and this method provides groundwork for the development of even more precise tools. PMID:22848729
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J.M.; Panetta, R.L.; Estberg, J.
1993-06-15
A 35-year record of monthly mean zonal wind data for the equatorial stratosphere is represented in terms of a vector (radius and phase angle) in a two-dimensional phase space defined by the normalized expansion coefficients of the two leading empirical orthogonal functions (EOFs) of the vertical structure. The tip of the vector completes one nearly circular loop during each cycle of the quasi-biennial oscillation (QBO). Hence, its position and rate of progress along the orbit of the point provide a measure of the instantaneous amplitude and rate of phase progression of the QBO. Although the phase of the QBO bears little if any relation to calendar month, the rate of phase progression is strongly modulated by the first and second harmonics of the annual cycle, with a primary maximum in April/May, in agreement with previous studies based on the descent rates of easterly and westerly regimes. A simple linear prediction model is developed for the rate of phase progression, based on the phase of the QBO and the phase of the annual cycle. The model is capable of hindcasting the phase of the QBO to within a specified degree of accuracy approximately 50% longer than a default scheme based on the mean observed rate of phase progression of the QBO (1 cycle per 28.1 months). If the seasonal dependence is ignored, the prediction equation corresponds to the "circle map," for which an extensive literature exists in dynamical systems theory. 17 refs., 14 figs., 2 tabs.
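A minimal sketch of the phase-space construction is shown below: the two leading EOFs of the wind anomalies are obtained from an SVD, their normalized expansion coefficients define the plane, and the QBO amplitude and phase are read off as the radius and angle of the trajectory. The variable names and the EOF method (a plain SVD of the anomaly matrix) are illustrative assumptions.

```python
import numpy as np

def qbo_phase_space(u):
    """u: (n_months, n_levels) zonal-mean zonal wind.

    Returns the amplitude (radius) and unwrapped phase angle of the trajectory in the
    plane spanned by the two leading EOFs, with each expansion coefficient series
    normalized to unit variance.
    """
    anom = u - u.mean(axis=0)
    _, _, vt = np.linalg.svd(anom, full_matrices=False)   # rows of vt are EOFs
    pcs = anom @ vt[:2].T                                  # two leading expansion coefficients
    pcs = pcs / pcs.std(axis=0)                            # normalize each coefficient series
    radius = np.hypot(pcs[:, 0], pcs[:, 1])
    phase = np.unwrap(np.arctan2(pcs[:, 1], pcs[:, 0]))
    return radius, phase
```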
On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.
Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba
2013-01-01
Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
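For the Euclidean special case mentioned above, sparse coding reduces to an l1-regularized least-squares problem. The sketch below solves it with a plain ISTA iteration; it is meant only to make the Euclidean baseline concrete and is not the manifold generalization proposed in the paper.

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA sparse coding of x over dictionary D (columns = atoms), Euclidean case:
    minimize 0.5 * ||x - D w||_2^2 + lam * ||w||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant: squared spectral norm
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - x)             # gradient of the smooth term
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return w
```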
On A Nonlinear Generalization of Sparse Coding and Dictionary Learning
Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba
2013-01-01
Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583
Adaptive Hybrid Picture Coding. Volume 2.
1985-02-01
Contents (excerpt): V.a Measurement Vector; V.b Size Variable Centroid Vector; V.c Shape Vector; … the Program for the Adaptive Line of Sight Method; B. Details of the Feature Vector Formation Program; C. … Shape recognition is analogous to recognition of curves in space. Therefore, well-known concepts and theorems from differential geometry can be…
Image search engine with selective filtering and feature-element-based classification
NASA Astrophysics Data System (ADS)
Li, Qing; Zhang, Yujin; Dai, Shengyang
2001-12-01
With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. We propose in this paper a selective filtering process and a novel approach to image classification based on feature elements in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format. Two parameters that can be obtained easily are used in the filtering process. Our classification approach first extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of an image according to the subjective perception of human beings. Unlike traditional image classification methods, our feature-element-based approach does not calculate distances between vectors in feature space, but instead tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
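A minimal sketch of the proposed control-based normalization is given below: feature-wise statistics are estimated on the control group only and applied to all subjects before training a linear SVM. The centering step, variable names and classifier settings are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def control_based_scale(X_train, y_train, X_test, control_label=0):
    """Scale each feature by the standard deviation estimated on the control group only,
    instead of the pooled (case + control) standard deviation."""
    ctrl = X_train[y_train == control_label]
    mu = ctrl.mean(axis=0)
    sd = ctrl.std(axis=0)
    sd[sd == 0] = 1.0                       # guard against constant features
    return (X_train - mu) / sd, (X_test - mu) / sd

# usage sketch (Xtr, ytr, Xte are hypothetical neuroimaging feature arrays)
# Xtr_n, Xte_n = control_based_scale(Xtr, ytr, Xte)
# clf = SVC(kernel="linear").fit(Xtr_n, ytr)
# predictions = clf.predict(Xte_n)
```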
2015-09-28
buoyant underwater vehicle with an interior space in which a length of said underwater vehicle is equal to one tenth of the acoustic wavelength; an…unmanned underwater vehicle that can function as an acoustic vector sensor. (2) Description of the Prior Art [0004] It is known that a propagating
Self-organization of meaning and the reflexive communication of information
Leydesdorff, Loet; Petersen, Alexander M.; Ivanova, Inga
2017-01-01
Following a suggestion from Warren Weaver, we extend the Shannon model of communication piecemeal into a complex systems model in which communication is differentiated both vertically and horizontally. This model enables us to bridge the divide between Niklas Luhmann’s theory of the self-organization of meaning in communications and empirical research using information theory. First, we distinguish between communication relations and correlations among patterns of relations. The correlations span a vector space in which relations are positioned and can be provided with meaning. Second, positions provide reflexive perspectives. Whereas the different meanings are integrated locally, each instantiation opens global perspectives – ‘horizons of meaning’ – along eigenvectors of the communication matrix. These next-order codifications of meaning can be expected to generate redundancies when interacting in instantiations. Increases in redundancy indicate new options and can be measured as local reduction of prevailing uncertainty (in bits). The systemic generation of new options can be considered as a hallmark of the knowledge-based economy. PMID:28232771
A geometric approach to problems in birational geometry.
Chi, Chen-Yu; Yau, Shing-Tung
2008-12-02
A classical set of birational invariants of a variety are its spaces of pluricanonical forms and some of their canonically defined subspaces. Each of these vector spaces admits a typical metric structure which is also birationally invariant. These vector spaces so metrized will be referred to as the pseudonormed spaces of the original varieties. A fundamental question is the following: Given two mildly singular projective varieties with some of the first variety's pseudonormed spaces being isometric to the corresponding ones of the second variety's, can one construct a birational map between them that induces these isometries? In this work, a positive answer to this question is given for varieties of general type. This can be thought of as a theorem of Torelli type for birational equivalence.
The next 25 years: Industrialization of space - Rationale for planning
NASA Technical Reports Server (NTRS)
Von Puttkamer, J.
1977-01-01
A methodology for planning the industrialization of space is discussed. The suggested approach combines the extrapolative ('push') approach, in which alternative futures are projected on the basis of past and current trends and tendencies, with the normative ('pull') view, in which an ideal state in the far future is postulated and policies and decisions are directed toward its attainment. Time-reversed vectors of the future are tied to extrapolated, trend-oriented vectors of the quasi-present to identify common plateaus or stepping stones in technological development. Important steps in the industrialization of space to attain the short-range goals of production of space-derived energy, goods and services and the long-range goal of space colonization are discussed.
Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector
NASA Astrophysics Data System (ADS)
Garfinkle, David; Glass, E. N.
2013-03-01
Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.
Embedding of multidimensional time-dependent observations.
Barnard, J P; Aldrich, C; Gerber, M
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
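A compact sketch of the embedding pipeline, assuming a delay-coordinate (Takens) embedding followed by FastICA to rotate the embedded coordinates onto linearly independent phase variables, is given below; the embedding dimension and delay are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def delay_embed(x, dim, tau):
    """Takens delay embedding of a (possibly multivariate) series x of shape (n, d)."""
    n = len(x) - (dim - 1) * tau
    return np.hstack([x[i * tau:i * tau + n] for i in range(dim)])

def embed_independent(x, dim=3, tau=5):
    """Embed the observations, then extract linearly independent phase variables with ICA."""
    x = np.atleast_2d(x).T if np.ndim(x) == 1 else np.asarray(x)
    emb = delay_embed(x, dim, tau)
    ica = FastICA(n_components=emb.shape[1], random_state=0)
    return ica.fit_transform(emb)
```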
Embedding of multidimensional time-dependent observations
NASA Astrophysics Data System (ADS)
Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
Foundation Mathematics for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-03-01
1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendices; Index.
Student Solution Manual for Foundation Mathematics for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-03-01
1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendix.
Lorentz symmetric n-particle systems without ``multiple times''
NASA Astrophysics Data System (ADS)
Smith, Felix
2013-05-01
The need for multiple times in relativistic n-particle dynamics is a consequence of Minkowski's postulated symmetry between space and time coordinates in a space-time s = [x1, …, x4] = [x, y, z, ict], Eq. (1). Poincaré doubted the need for this space-time symmetry, believing Lorentz covariance could also prevail in some geometries with a three-dimensional position space and a quite different time coordinate. The Hubble expansion observed later justifies a specific geometry of this kind, a negatively curved position 3-space expanding with time at the Hubble rate l_H(t) = l_{H,0} + cΔt (F. T. Smith, Ann. Fond. L. de Broglie, 30, 179 (2005) and 35, 395 (2010)). Its position 4-vector is not s but q = [x1, …, x4] = [x, y, z, il_H(t)], and shows no 4-space symmetry. What is observed is always a difference 4-vector Δq = [Δx, Δy, Δz, icΔt], and this displays the structure of Eq. (1) perfectly. Thus we find the standard 4-vector of special relativity in a geometry that does not require a Minkowski space-time at all, but a quite different geometry with an expanding 3-space symmetry and an independent time. The same Lorentz symmetry with but a single time extends to 2- and n-body systems.
Fast metabolite identification with Input Output Kernel Regression.
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-06-15
An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, which consists of mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. celine.brouard@aalto.fi Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
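The two phases can be sketched with explicit output feature vectors standing in for the output-kernel feature map: phase one is a kernel ridge regression from the input (spectrum) kernel to the output features, and phase two picks the candidate molecule whose features are closest to the prediction. The kernel choice, regularization and variable names below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def iokr_fit(K_train, Psi_train, lam=1e-2):
    """Phase 1: kernel ridge regression from the input kernel to output feature vectors.

    K_train   : (n, n) input (spectrum) kernel matrix
    Psi_train : (n, p) output features of the training molecules (e.g. fingerprints)
    Returns the dual coefficient matrix of shape (n, p).
    """
    n = K_train.shape[0]
    return np.linalg.solve(K_train + lam * np.eye(n), Psi_train)

def iokr_predict(K_test_train, coef, Psi_candidates):
    """Phase 2 (preimage): map each test spectrum to the candidate molecule whose
    output features are closest to the predicted output feature vector."""
    Psi_pred = K_test_train @ coef                                     # (m, p) predictions
    d2 = ((Psi_pred[:, None, :] - Psi_candidates[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                                           # index of best candidate
```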
Fast metabolite identification with Input Output Kernel Regression
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-01-01
Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, which consists of mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
Tang, Wing Chun; Tang, Ying Yung; Lam, Carly S Y
2014-01-01
The aim of the study was to evaluate the level of agreement between the 'Representative Value' (RV) of refraction obtained from the Shin-Nippon NVision-K 5001 instrument and values calculated from individual measurement readings using standard algebraic methods. Cycloplegic autorefraction readings for 101 myopic children aged 8-13 years (10.9 ± 1.42 years) were obtained using the Shin-Nippon NVision-K 5001. Ten autorefractor measurements were taken for each eye. The spherical equivalent (SE), sphere (Sph) and cylindrical component (Cyl) power of each eye were calculated, firstly, by averaging the 10 repeated measurements (Mean SE, Mean Sph and Mean Cyl), and secondly, by the vector representation method (Vector SE, Vector Sph and Vector Cyl). These calculated values were then compared with those of RV (RV SE, RV Sph and RV Cyl) provided by the proprietary software of the NVision-K 5001 using one-way analysis of variance (ANOVA). The agreement between the methods was also assessed. The SE of the subjects ranged from -5.37 to -0.62 D (mean ± SD = -2.89 ± 1.01 D). The Mean SE was in exact agreement with the Vector SE. There were no significant differences between the RV readings and those calculated using non-vectorial or vectorial methods for any of the refractive powers (SE, p = 0.99; Sph, p = 0.93; Cyl, p = 0.24). The (mean ± SD) differences were: RV SE vs Mean SE (and also RV SE vs Vector SE) -0.01 ± 0.06 D; RV Sph vs Mean Sph, -0.01 ± 0.05 D; RV Sph vs Vector Sph, -0.04 ± 0.06 D; RV Cyl vs Mean Cyl, 0.01 ± 0.07 D; RV Cyl vs Vector Cyl, 0.06 ± 0.09 D. Ninety-eight percent of RV readings differed from their non-vectorial or vectorial counterparts by less than 0.25 D. The RV values showed good agreement with the results calculated using conventional methods. Although the formula used to calculate RV by the NVision-K 5001 autorefractor is proprietary, our results provide validation for the use of RV measurements in clinical practice and vision science research. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
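One common way to average repeated refractions vectorially, which may or may not match the proprietary RV formula, is to convert each sphere/cylinder/axis reading to a power vector (M, J0, J45), average componentwise, and convert back; the sketch below follows that convention and returns the result in negative-cylinder form.

```python
import numpy as np

def to_power_vector(sph, cyl, axis_deg):
    """Sphere/cylinder/axis -> power vector (M, J0, J45)."""
    ax = np.deg2rad(axis_deg)
    M = sph + cyl / 2.0
    J0 = -(cyl / 2.0) * np.cos(2 * ax)
    J45 = -(cyl / 2.0) * np.sin(2 * ax)
    return np.array([M, J0, J45])

def from_power_vector(M, J0, J45):
    """Power vector -> sphere/cylinder/axis in negative-cylinder form."""
    c = -2.0 * np.hypot(J0, J45)
    s = M - c / 2.0
    ax = np.degrees(0.5 * np.arctan2(J45, J0)) % 180.0
    return s, c, ax

def vector_mean_refraction(sph, cyl, axis_deg):
    """Average repeated readings in power-vector space, then convert back."""
    pv = np.mean([to_power_vector(s, c, a) for s, c, a in zip(sph, cyl, axis_deg)], axis=0)
    return from_power_vector(*pv)
```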
A link between torse-forming vector fields and rotational hypersurfaces
NASA Astrophysics Data System (ADS)
Chen, Bang-Yen; Verstraelen, Leopold
Torse-forming vector fields introduced by Yano [On torse forming direction in a Riemannian space, Proc. Imp. Acad. Tokyo 20 (1944) 340-346] are a natural extension of concurrent and concircular vector fields. Such vector fields have many nice applications to geometry and mathematical physics. In this paper, we establish a link between rotational hypersurfaces and torse-forming vector fields. More precisely, our main result states that, for a hypersurface M of 𝔼n+1 with n ≥ 3, the tangential component xT of the position vector field of M is a proper torse-forming vector field on M if and only if M is contained in a rotational hypersurface whose axis of rotation contains the origin.
The canonical Lagrangian approach to three-space general relativity
NASA Astrophysics Data System (ADS)
Shyam, Vasudev; Venkatesh, Madhavan
2013-07-01
We study the action for the three-space formalism of general relativity, better known as the Barbour-Foster-Ó Murchadha action, which is a square-root Baierlein-Sharp-Wheeler action. In particular, we explore the (pre)symplectic structure by pulling it back via a Legendre map to the tangent bundle of the configuration space of this action. With it we attain the canonical Lagrangian vector field which generates the gauge transformations (3-diffeomorphisms) and the true physical evolution of the system. This vector field encapsulates all the dynamics of the system. We also discuss briefly the observables and perennials for this theory. We then present a symplectic reduction of the constrained phase space.
NASA Technical Reports Server (NTRS)
Chipman, Russell A.
1996-01-01
This report covers work performed during the period of November 1994 through March 1996 on the design of a Space-borne Solar Vector Magnetograph. This work has been performed as part of a design team under the supervision of Dr. Mona Hagyard and Dr. Alan Gary of the Space Science Laboratory. Many tasks were performed and this report documents the results from some of those tasks, each contained in the corresponding appendix. Appendices are organized in chronological order.
The Absolute Vector Magnetometers on Board Swarm, Lessons Learned From Two Years in Space.
NASA Astrophysics Data System (ADS)
Hulot, G.; Leger, J. M.; Vigneron, P.; Brocco, L.; Olsen, N.; Jager, T.; Bertrand, F.; Fratter, I.; Sirol, O.; Lalanne, X.
2015-12-01
ESA's Swarm satellites carry 4He absolute magnetometers (ASM), designed by CEA-Léti and developed in partnership with CNES. These instruments are the first-ever space-borne magnetometers to use a common sensor to simultaneously deliver 1Hz independent absolute scalar and vector readings of the magnetic field. They have provided not only the very high accuracy scalar field data nominally required by the mission (for both science and calibration purposes, since each satellite also carries a low noise high frequency fluxgate magnetometer designed by DTU), but also very useful experimental absolute vector data. In this presentation, we will report on the status of the instruments, as well as on the various tests and investigations carried out using these experimental data since launch in November 2013. In particular, we will illustrate the advantages of flying ASM instruments on space-borne magnetic missions for nominal data quality checks, geomagnetic field modeling and science objectives.
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
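A bare-bones Monte Carlo version of the collision probability computation mentioned above is sketched below: states for both objects are sampled from their mean state vectors and covariance matrices, and the fraction of samples whose miss distance falls below a combined hard-body radius is reported. Sampling at a single shared epoch and the variable names are simplifying assumptions, not the operational ESC procedure.

```python
import numpy as np

def collision_probability_mc(x1, P1, x2, P2, hard_body_radius,
                             n_samples=200_000, seed=0):
    """Monte Carlo estimate of the probability that two objects approach within a
    combined hard-body radius, given mean states and covariances at a common epoch.

    x1, x2 : (6,) mean state vectors (position + velocity); only positions are used here
    P1, P2 : (6, 6) state covariance matrices
    """
    rng = np.random.default_rng(seed)
    s1 = rng.multivariate_normal(x1, P1, n_samples)
    s2 = rng.multivariate_normal(x2, P2, n_samples)
    miss = np.linalg.norm(s1[:, :3] - s2[:, :3], axis=1)   # sample miss distances
    return np.mean(miss < hard_body_radius)
```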
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between the GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
NASA Astrophysics Data System (ADS)
Mitri, Farid G.
2018-01-01
Generalized solutions of vector Airy light-sheets, adjustable per their derivative order m, are introduced stemming from the Lorenz gauge condition and Maxwell's equations using the angular spectrum decomposition method. The Cartesian components of the incident radiated electric, magnetic and time-averaged Poynting vector fields in free space (excluding evanescent waves) are determined and computed with particular emphasis on the derivative order of the Airy light-sheet and the polarization of the magnetic vector potential forming the beam. Negative transverse time-averaged Poynting vector components can arise, while the longitudinal counterparts are always positive. Moreover, the analysis is extended to compute the optical radiation force and spin torque vector components on a lossless dielectric prolate subwavelength spheroid in the framework of the electric dipole approximation. The results show that negative forces and spin torque sign reversals arise depending on the derivative order of the beam, the polarization of the magnetic vector potential, and the orientation of the subwavelength prolate spheroid in space. The spin torque sign reversal suggests that counter-clockwise or clockwise rotations around the center of mass of the subwavelength spheroid can occur. The results find useful applications in single Airy light-sheet tweezers, particle manipulation, handling, and rotation, to name a few examples.
Distance between RBS and AUG plays an important role in overexpression of recombinant proteins.
Berwal, Sunil K; Sreejith, R K; Pal, Jayanta K
2010-10-15
The spacing between ribosome binding site (RBS) and AUG is crucial for efficient overexpression of genes when cloned in prokaryotic expression vectors. We undertook a brief study on the overexpression of genes cloned in Escherichia coli expression vectors, wherein the spacing between the RBS and the start codon was varied. SDS-PAGE and Western blot analysis indicated a high level of protein expression only in constructs where the spacing between RBS and AUG was approximately 40 nucleotides or more, despite the synthesis of the transcripts in the representative cases investigated. Copyright 2010 Elsevier Inc. All rights reserved.
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually reached by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie will be mandatory at the sub-mm level, which is currently not achievable. To assess the local tie effects on ITRF computations, investigations of the error sources will be done to realistically assess and consider them. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie error and technique-specific error contributions to uncertainties and thus assess the accuracy of space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
Estimation of attitude sensor timetag biases
NASA Technical Reports Server (NTRS)
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
Algebraic and radical potential fields. Stability domains in coordinate and parametric space
NASA Astrophysics Data System (ADS)
Uteshev, Alexei Yu.
2018-05-01
A dynamical system dX/dt = F(X; A) is treated, where F(X; A) is a polynomial (or, more generally, radical-containing) function in the vectors of state variables X ∈ ℝn and parameters A ∈ ℝm. We are looking for stability domains in both spaces, i.e. (a) a domain ℙ ⊂ ℝm such that for any parameter vector specialization A ∈ ℙ, there exists a stable equilibrium for the dynamical system, and (b) a domain 𝕊 ⊂ ℝn such that any point X* ∈ 𝕊 could be made a stable equilibrium by a suitable specialization of the parameter vector A.
Liu, Bo; Zhang, Lijia; Xin, Xiangjun
2018-03-19
This paper proposes and demonstrates an enhanced secure 4-D modulation optical generalized filter bank multi-carrier (GFBMC) system based on joint constellation and Stokes vector scrambling. The constellation and Stokes vectors are scrambled by using different scrambling parameters. A multi-scroll Chua's circuit map is adopted as the chaotic model. Large secure key space can be obtained due to the multi-scroll attractors and independent operability of subcarriers. A 40.32Gb/s encrypted optical GFBMC signal with 128 parallel subcarriers is successfully demonstrated in the experiment. The results show good resistance against the illegal receiver and indicate a potential way for the future optical multi-carrier system.
NASA Astrophysics Data System (ADS)
Carrott, Anthony; Siegel, Edward Carl-Ludwig; Hoover, John-Edgar; Ness, Elliott
2013-03-01
Terrorism/Criminology/Sociology: non-linear applied-mathematician (``nose-to-the-grindstone''/``gearheadism'') ``modelers'': Worden, Short, ... criminologists/counter-terrorists/sociologists confront [SIAM Conf. on Nonlinearity, Seattle (12); Canadian Sociology Conf., Burnaby (12)]. ``The `Sins' of the Fathers Visited Upon the Sons'': Zeno vs Ising vs Heisenberg vs Stoner vs Hubbard vs Siegel ``SODHM'' (but NO Y!!!) vs ...??? Magnetism, and in turn magnetism itself, is confronted BY MAGNETISM, via relatively magnetism/metal-insulator conductivity/percolation phase-transitions critical-phenomena-illiterate non-linear applied-mathematician (nose-to-the-grindstone/``gearheadism'') ``modelers''. What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe!!!'' [Charles Middleton, aka ``His Imperial Majesty The Emperor Ming `The Merciless'!!!''] magnetism-Hamiltonian phase-transitions percolation ``models''!: Zeno (~2350 BCE) to Peter the Pilgrim (1150) to Gilbert (1600) to Faraday (1815-1820) to Tate (1870-1880) to Ewing (1882) hysteresis to Barkhausen (1885) to Curie (1895)-Weiss (1895) to Ising-Lenz (r-space/localized-scalar/discrete/1911) to Heisenberg (r-space/localized-vector/discrete/1927) to Preisach (1935) to Stoner (electron/k-space/itinerant-vector/discrete/39) to Stoner-Wohlfarth (technical-magnetism hysteresis/r-space/itinerant-vector/discrete/48) to Hubbard-Longuet-Higgins (k-space versus r-space/
1990-10-01
Using the Solar Vector Magnetograph, a solar observation facility at NASA's Marshall Space Flight Center (MSFC), scientists from the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama, are monitoring the explosive potential of magnetic areas of the Sun. This effort could someday lead to better prediction of severe space weather, a phenomenon that occurs when blasts of particles and magnetic fields from the Sun impact the magnetosphere, the magnetic bubble around the Earth. When massive solar explosions, known as coronal mass ejections, blast through the Sun's outer atmosphere and plow toward Earth at speeds of thousands of miles per second, the resulting effects can be harmful to communication satellites and astronauts outside the Earth's magnetosphere. Like severe weather on Earth, severe space weather can be costly. On the ground, the magnetic storm wrought by these solar particles can knock out electric power. The researchers from MSFC and NSSTC's solar physics group develop instruments for measuring magnetic fields on the Sun. With these instruments, the group studies the origin, structure, and evolution of the solar magnetic field and the impact it has on Earth's space environment. This photograph shows the Solar Vector Magnetograph and Dr. Mona Hagyard of MSFC, the director of the observatory who leads the development, operation and research program of the Solar Vector Magnetograph.
The organization of conspecific face space in nonhuman primates
Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.
2013-01-01
Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
Some Applications Of Semigroups And Computer Algebra In Discrete Structures
NASA Astrophysics Data System (ADS)
Bijev, G.
2009-11-01
An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations, and matrices corresponding to them, are also defined and investigated. Let Ax = b be an equation with matrix A and vectors x and b Boolean. Stochastic experiments for solving the equation, which involve the maps defined and use computer algebra methods, have been carried out. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal to or close to the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples for computations with binary relations using Maple are given.
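As a rough illustration of the kind of computation described above, the sketch below (a minimal Python example, not the authors' algorithm) forms a Boolean matrix-vector product and searches exhaustively for the x that brings Ax closest to b in Hamming distance; the maps p and the stochastic search strategy of the paper are not reproduced, and the matrix is made up.

```python
import numpy as np
from itertools import product

def bool_matvec(A, x):
    # Boolean matrix-vector product: OR over the AND of row entries with x.
    return np.any(A & x, axis=1)

def hamming(u, v):
    return int(np.sum(u != v))

# Tiny example: brute-force search for x minimizing the Hamming distance
# between A x and b (feasible only for very small dimensions).
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=bool)
b = np.array([1, 0, 1], dtype=bool)

best_x, best_d = None, np.inf
for bits in product([False, True], repeat=A.shape[1]):
    x = np.array(bits)
    d = hamming(bool_matvec(A, x), b)
    if d < best_d:
        best_x, best_d = x, d

print(best_x.astype(int), best_d)
```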
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.
Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin
2016-07-01
An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based Three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by some technique like concatenation. However, some implicit structural or local contextual information may be lost in this transformation. According to the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only directly operate on the original 3D tensor patterns, but also efficiently reduce the computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixel, and a Hausdorff Distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM.
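For readers unfamiliar with HOSVD itself, the sketch below is a minimal Python/NumPy illustration of the decomposition (mode-n unfoldings, truncated factor matrices, core tensor); it is not the authors' AAM pipeline, and the random toy array merely stands in for a lung volume.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Higher-order SVD: factor matrices are the leading left singular vectors
    # of each mode-n unfolding, truncated to the requested ranks.
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])
    # Core tensor: project T onto the factor matrices, mode by mode.
    core = T
    for mode, u in enumerate(U):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, U

# Toy 3D "volume"
T = np.random.rand(8, 8, 8)
core, U = hosvd(T, ranks=(4, 4, 4))
print(core.shape, [u.shape for u in U])
```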
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
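A schematic rendering of the construction may help; the notation below is chosen here purely for illustration (the paper's own symbols and weighting may differ): the residual of a quasi-equilibrium path under the Liouville equation is weighted, squared, ensemble-averaged, and integrated in time.

```latex
% Schematic lack-of-fit cost (illustrative notation, not taken from the paper):
% \rho_{\lambda(t)} is the quasi-equilibrium density along a path \lambda(t) of
% resolved-variable parameters, H the Hamiltonian, W a weight operator.
\[
  R(t) = \partial_t \rho_{\lambda(t)} - \{H, \rho_{\lambda(t)}\},
  \qquad
  \Sigma[\lambda] = \int_{t_0}^{t_1}
     \big\langle R(t),\, W\, R(t) \big\rangle_{\rho_{\lambda(t)}}\, dt .
\]
% The best-fit evolution of the mean resolved vector is the minimizer of \Sigma
% over admissible paths; its closed equations follow from Hamilton-Jacobi theory.
```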
2012-03-01
[Only fragments of this report survive: code comments defining symbols (re = the radius of the Earth at the equator, Pn = the Legendre polynomial, L = the geocentric latitude, GeoDtLat = geodetic latitude in the range -Pi/2 to Pi/2) for a routine that computes atmospheric density at an altitude above an oblate Earth, given the position vector in the Geocentric Equatorial frame.]
Pure state consciousness and its local reduction to neuronal space
NASA Astrophysics Data System (ADS)
Duggins, A. J.
2013-01-01
The single neuronal state can be represented as a vector in a complex space, spanned by an orthonormal basis of integer spike counts. In this model a scalar element of experience is associated with the instantaneous firing rate of a single sensory neuron over repeated stimulus presentations. Here the model is extended to composite neural systems that are tensor products of single neuronal vector spaces. Depiction of the mental state as a vector on this tensor product space is intended to capture the unity of consciousness. The density operator is introduced as its local reduction to the single neuron level, from which the firing rate can again be derived as the objective correlate of a subjective element. However, the relational structure of perceptual experience only emerges when the non-local mental state is considered. A metric of phenomenal proximity between neuronal elements of experience is proposed, based on the cross-correlation function of neurophysiology, but constrained by the association of theoretical extremes of correlation/anticorrelation in inseparable 2-neuron states with identical and opponent elements respectively.
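The local reduction referred to above is, mathematically, a partial trace. The sketch below (a minimal NumPy example with a hypothetical spike-count cutoff of 3; not the author's formulation) reduces a two-neuron pure state to a single-neuron density operator, from which a spike-count distribution, and hence a firing rate, can be read off.

```python
import numpy as np

def reduced_density(psi, dims, keep):
    # psi: state vector on a tensor-product space with subsystem dims `dims`;
    # return the density operator of subsystem `keep` by tracing out the rest.
    psi = psi.reshape(dims)
    axes = [i for i in range(len(dims)) if i != keep]
    # rho[m, n] = sum over traced indices of psi[.., m, ..] * conj(psi[.., n, ..])
    return np.tensordot(psi, psi.conj(), axes=(axes, axes))

# Two "neurons", spike counts truncated at 0..3 (dimension 4 each).
d = 4
# An inseparable toy state: (|0,3> + |3,0>) / sqrt(2).
psi = np.zeros((d, d), dtype=complex)
psi[0, 3] = psi[3, 0] = 1 / np.sqrt(2)

rho1 = reduced_density(psi.ravel(), (d, d), keep=0)
print(np.real(np.diag(rho1)))    # spike-count distribution of neuron 1
print(np.real(np.trace(rho1)))   # should be 1
```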
NASA Technical Reports Server (NTRS)
Walker, H. F.
1979-01-01
In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.
Illustrating dynamical symmetries in classical mechanics: The Laplace-Runge-Lenz vector revisited
NASA Astrophysics Data System (ADS)
O'Connell, Ross C.; Jagannathan, Kannan
2003-03-01
The inverse square force law admits a conserved vector that lies in the plane of motion. This vector has been associated with the names of Laplace, Runge, and Lenz, among others. Many workers have explored aspects of the symmetry and degeneracy associated with this vector and with analogous dynamical symmetries. We define a conserved dynamical variable α that characterizes the orientation of the orbit in two-dimensional configuration space for the Kepler problem and an analogous variable β for the isotropic harmonic oscillator. This orbit orientation variable is canonically conjugate to the angular momentum component normal to the plane of motion. We explore the canonical one-parameter group of transformations generated by α(β). Because we have an obvious pair of conserved canonically conjugate variables, it is desirable to use them as a coordinate-momentum pair. In terms of these phase space coordinates, the form of the Hamiltonian is nearly trivial because neither member of the pair can occur explicitly in the Hamiltonian. From these considerations we gain a simple picture of dynamics in phase space. The procedure we use is in the spirit of the Hamilton-Jacobi method.
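As a quick numerical companion to this discussion, the following Python sketch (units, masses and initial conditions chosen arbitrarily here) integrates a bound Kepler orbit and checks that the Laplace-Runge-Lenz vector A = p x L - m k r/|r| stays constant along the motion.

```python
import numpy as np

def kepler_accel(r, k=1.0, m=1.0):
    # Inverse-square attraction: a = F/m = -k r / (m |r|^3)
    return -k * r / np.linalg.norm(r)**3 / m

def lrl_vector(r, v, k=1.0, m=1.0):
    # A = p x L - m k r/|r|, with L the angular momentum (normal to the plane).
    p = m * v
    L = np.cross(r, p)
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

# Integrate a bound orbit with a simple leapfrog (kick-drift-kick) scheme.
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.8, 0.0])
dt = 1e-3
A0 = lrl_vector(r, v)
for _ in range(20000):
    v_half = v + 0.5 * dt * kepler_accel(r)
    r = r + dt * v_half
    v = v_half + 0.5 * dt * kepler_accel(r)

print(A0, lrl_vector(r, v))   # the two vectors should agree closely
```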
Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic
NASA Astrophysics Data System (ADS)
Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat
2017-03-01
The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return while minimizing risk. This paper discusses the modelling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns have a certain distribution, and the risk of the portfolio is measured using the Value-at-Risk (VaR). The optimization of the Mean-VaR portfolio model is carried out using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modelling is a weighting-vector equation that depends on the mean return vector of the assets, the unit (identity) vector, the covariance matrix of asset returns, and the risk tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analysed. Based on the analysis of the return data of these five stocks, the weight composition vector and the efficient-surface graph of the portfolio are obtained. The weight composition vector and efficient-surface charts can be used as a guide for investors in making investment decisions.
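To make the closed-form structure concrete, here is a minimal Python sketch of the classical risk-tolerance formulation (maximize tau * mu'w - 0.5 * w'Sigma w subject to 1'w = 1, shorting allowed), solved with a Lagrange multiplier. This is a standard mean-variance reduction used for illustration only, not the paper's exact Mean-VaR model, and the five-asset data are hypothetical.

```python
import numpy as np

def mv_weights(mu, Sigma, tau):
    # Maximize  tau * mu^T w - 0.5 * w^T Sigma w  subject to  1^T w = 1.
    # Stationarity gives w = Sigma^{-1} (tau * mu + lam * 1), with lam fixed
    # by the budget constraint.
    ones = np.ones(len(mu))
    Sinv_mu = np.linalg.solve(Sigma, mu)
    Sinv_1 = np.linalg.solve(Sigma, ones)
    lam = (1.0 - tau * ones @ Sinv_mu) / (ones @ Sinv_1)
    return tau * Sinv_mu + lam * Sinv_1

# Hypothetical data for five assets.
mu = np.array([0.08, 0.10, 0.12, 0.07, 0.09])
Sigma = np.diag([0.04, 0.05, 0.09, 0.03, 0.06])

for tau in (0.0, 0.5, 1.0):   # sweep risk tolerance to trace the efficient frontier
    w = mv_weights(mu, Sigma, tau)
    print(tau, w.round(3), float(w @ mu), float(w @ Sigma @ w))
```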
NASA Technical Reports Server (NTRS)
Boda, Wanda; Hargens, Alan R.; Aratow, Michael; Ballard, Richard E.; Hutchinson, Karen; Murthy, Gita; Campbell, James
1994-01-01
The purpose of this study is to compare footward forces, gait kinematics, and muscle activation patterns (EMG) generated during supine treadmill exercise against LBNP with the same parameters during supine bungee resistance exercise and upright treadmill exercise. We hypothesize that the three conditions will be similar. These results will help validate treadmill exercise during LBNP as a viable technique to simulate gravity during space flight. We are evaluating LBNP as a means to load the musculoskeletal and cardiovascular systems without gravity. Such loading should help prevent physiologic deconditioning during space flight. The best ground-based simulation of LBNP treadmill exercise in microgravity is supine LBNP treadmill exercise on Earth because the supine footward force vector is neither directed nor supplemented by Earth's gravity.
Asymptotically Almost Periodic Solutions of Evolution Equations in Banach Spaces
NASA Astrophysics Data System (ADS)
Ruess, W. M.; Phong, V. Q.
The linear abstract evolution equation (∗) u'(t) = Au(t) + ƒ(t), t ∈ R, is considered, where A: D(A) ⊂ E → E is the generator of a strongly continuous semigroup of operators in the Banach space E. Starting from analogs of Kadets' and Loomis' Theorems for vector-valued almost periodic functions, we show that if σ(A) ∩ iR is countable and ƒ: R → E is [asymptotically] almost periodic, then every bounded and uniformly continuous solution u to (∗) is [asymptotically] almost periodic, provided e^{-λt}u(t) has uniformly convergent means for all λ ∈ σ(A) ∩ iR. Related results on Eberlein-weakly asymptotically almost periodic, periodic, asymptotically periodic and C_0-solutions of (∗), as well as on the discrete case of solutions of difference equations, are included.
Generalized sidelobe canceller beamforming method for ultrasound imaging.
Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang
2017-03-01
A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution and the robustness against noise of the traditional generalized sidelobe canceller (GSC) and the coherence-factor-combined method (GSC-CF). In the GSC algorithm, the weighting vector is divided into adaptive and non-adaptive parts, while the non-adaptive part does not block all of the desired signal. A modified steer vector of the IGSC algorithm is generated by the projection of the non-adaptive vector on the signal space constructed from the covariance matrix of the received data. The blocking matrix is generated based on the orthogonal complementary space of the modified steer vector, and the weighting vector is updated subsequently. The performance of IGSC was investigated by simulations and experiments. In simulations, IGSC outperformed GSC-CF in terms of spatial resolution by 0.1 mm regardless of whether noise was present, as well as in terms of contrast ratio. The proposed IGSC can be further improved by combining it with CF. The experimental results also validated the effectiveness of the proposed algorithm on a dataset provided by the University of Michigan.
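The core step described above, projecting the nominal steering vector onto a signal subspace built from the data covariance and then forming a blocking matrix orthogonal to it, can be sketched in a few lines of Python. The array geometry, snapshot model and signal rank below are hypothetical; this is only an illustration of the projection idea, not the published IGSC beamformer.

```python
import numpy as np

def modified_steering(a, R, signal_rank):
    # Project the nominal steering vector `a` onto the signal subspace
    # spanned by the dominant eigenvectors of the sample covariance R.
    w, V = np.linalg.eigh(R)
    Es = V[:, np.argsort(w)[::-1][:signal_rank]]
    a_mod = Es @ (Es.conj().T @ a)
    return a_mod / np.linalg.norm(a_mod)

def blocking_matrix(a_mod):
    # Columns spanning the orthogonal complement of the modified steer vector.
    M = len(a_mod)
    P = np.eye(M) - np.outer(a_mod, a_mod.conj())
    Q, _ = np.linalg.qr(P)
    return Q[:, :M - 1]

# Hypothetical 8-element array with 200 noisy snapshots plus one signal.
M = 8
a = np.exp(1j * np.pi * np.arange(M) * np.sin(0.2))       # nominal steering vector
X = (np.random.randn(M, 200) + 1j * np.random.randn(M, 200)) / np.sqrt(2)
X += np.outer(a, np.random.randn(200))                    # add a signal component
R = X @ X.conj().T / X.shape[1]

a_mod = modified_steering(a, R, signal_rank=1)
B = blocking_matrix(a_mod)
print(np.linalg.norm(B.conj().T @ a_mod))   # ~0: the blocking matrix rejects the steered direction
```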
Unitary Operators on the Document Space.
ERIC Educational Resources Information Center
Hoenkamp, Eduard
2003-01-01
Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
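A minimal Python illustration of the LSI idea (truncated SVD of a toy term-document matrix, with a query folded into the concept space) is given below; the matrix and the choice of k are made up for the example.

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
A = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

# Latent semantic indexing: keep only k conceptual dimensions of the SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = Vt[:k].T   # documents as points in the k-dimensional concept space

def fold_in_query(q):
    # Standard folding-in: project a term-frequency query into concept space.
    return q @ U[:, :k] @ np.diag(1.0 / s[:k])

q = np.array([1, 0, 0, 1], dtype=float)
qk = fold_in_query(q)

# Rank documents by cosine similarity to the query in the reduced space.
sims = docs_k @ qk / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(qk) + 1e-12)
print(np.argsort(sims)[::-1])
```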
Effective traffic features selection algorithm for cyber-attacks samples
NASA Astrophysics Data System (ADS)
Li, Yihong; Liu, Fangzheng; Du, Zhenyu
2018-05-01
By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, we calculate the variation of clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature vector is evaluated according to the result. A feature vector is considered effective if its degree of distinctiveness exceeds the set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, and so is the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
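The leave-one-feature-out idea can be sketched in Python with scikit-learn as follows (k-means++ initialization, with the silhouette score used here as a stand-in for the paper's clustering-performance measure; the traffic data and the threshold are synthetic).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def feature_distinctiveness(X, n_clusters=2, seed=0):
    # Score each feature by how much the clustering quality drops when the
    # feature is removed (silhouette score as the performance proxy).
    base_labels = KMeans(n_clusters, init="k-means++", n_init=10,
                         random_state=seed).fit_predict(X)
    base = silhouette_score(X, base_labels)
    scores = []
    for j in range(X.shape[1]):
        Xr = np.delete(X, j, axis=1)
        labels = KMeans(n_clusters, init="k-means++", n_init=10,
                        random_state=seed).fit_predict(Xr)
        scores.append(base - silhouette_score(Xr, labels))
    return np.array(scores)

# Hypothetical traffic features: two informative columns, three noise columns.
rng = np.random.default_rng(0)
attack = np.hstack([rng.normal(3, 1, (200, 2)), rng.normal(0, 1, (200, 3))])
background = np.hstack([rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 3))])
X = np.vstack([attack, background])

scores = feature_distinctiveness(X)
threshold = 0.05
print(scores.round(3), np.where(scores > threshold)[0])  # indices of "effective" features
```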
Dynamics of Vortex and Magnetic Lines in Ideal Hydrodynamics and MHD
NASA Astrophysics Data System (ADS)
Kuznetsov, E. A.; Ruban, V. P.
Vortex line and magnetic line representations are introduced for the description of flows in ideal hydrodynamics and MHD, respectively. For incompressible fluids it is shown that, with the help of this transformation, the equations of motion for the vorticity φ and the magnetic field follow from a variational principle. By means of this representation it is possible to integrate the system of hydrodynamic type with the Hamiltonian H = ∫|φ| dr. It is also demonstrated that these representations allow one to remove, from the noncanonical Poisson brackets defined on the space of divergence-free vector fields, the degeneracy connected with the vorticity frozenness for the Euler equation and with the magnetic field frozenness for ideal MHD. For MHD a new Weber-type transformation is found. It is shown how this transformation can be obtained from the two-fluid model when electrons and ions can be considered as two independent fluids. The Weber-type transformation for ideal MHD gives the whole Lagrangian vector invariant. When this invariant is absent, this transformation coincides with the Clebsch representation analog introduced in [1].
NASA Technical Reports Server (NTRS)
Williams, D. H.
1983-01-01
A simulation study was undertaken to evaluate two time-based self-spacing techniques for in-trail following during terminal area approach. An electronic traffic display was provided in the weather radarscope location. The displayed self-spacing cues allowed the simulated aircraft to follow and to maintain spacing on another aircraft which was being vectored by air traffic control (ATC) for landing in a high-density terminal area. Separation performance data indicate the information provided on the traffic display was adequate for the test subjects to accurately follow the approach path of another aircraft without the assistance of ATC. The time-based technique with a constant-delay spacing criterion produced the most satisfactory spacing performance. Pilot comments indicate the workload associated with the self-separation task was very high and that additional spacing command information and/or aircraft autopilot functions would be desirable for operational implementation of the self-spacing task.
Hypercyclic subspaces for Frechet space operators
NASA Astrophysics Data System (ADS)
Petersson, Henrik
2006-07-01
A continuous linear operator T is hypercyclic if there is an x such that the orbit {T^n x} is dense, and such a vector x is said to be hypercyclic for T. Recent progress shows that it is possible to characterize Banach space operators that have a hypercyclic subspace, i.e., an infinite-dimensional closed subspace consisting, except for zero, of hypercyclic vectors. The following is known to hold: a Banach space operator T has a hypercyclic subspace if there is a sequence (n_i) and an infinite-dimensional closed subspace E such that T is hereditarily hypercyclic for (n_i) and T^{n_i} -> 0 pointwise on E. In this note we extend this result to the setting of Frechet spaces that admit a continuous norm, and study some applications for important function spaces. As an application we also prove that any infinite-dimensional separable Frechet space with a continuous norm admits an operator with a hypercyclic subspace.
A space-efficient quantum computer simulator suitable for high-speed FPGA implementation
NASA Astrophysics Data System (ADS)
Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel
2009-05-01
Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves a high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
NASA Astrophysics Data System (ADS)
Hey, J. D.
2015-09-01
On the basis of the original definition and analysis of the vector operator by Pauli (1926 Z. Phys. 36 336-63), and further developments by Flamand (1966 J. Math. Phys. 7 1924-31), and by Becker and Bleuler (1976 Z. Naturforsch. 31a 517-23), we consider the action of the operator on both spherical polar and parabolic basis state wave functions, both with and without direct use of Pauli’s identity (Valent 2003 Am. J. Phys. 71 171-75). Comparison of the results, with the aid of two earlier papers (Hey 2006 J. Phys. B: At. Mol. Opt. Phys. 39 2641-64, Hey 2007 J. Phys. B: At. Mol. Opt. Phys. 40 4077-96), yields a convenient ladder technique in the form of a recurrence relation for calculating the transformation coefficients between the two sets of basis states, without explicit use of generalized hypergeometric functions. This result is therefore very useful for application to Stark effect and impact broadening calculations applied to high-n radio recombination lines from tenuous space plasmas. We also demonstrate the versatility of the Runge-Lenz-Pauli vector operator as a means of obtaining recurrence relations between expectation values of successive powers of quantum mechanical operators, by using it to provide, as an example, a derivation of the Kramers-Pasternack relation. It is suggested that this operator, whose potential use in Stark- and Zeeman-effect calculations for magnetically confined fusion edge plasmas (Rosato, Marandet and Stamm 2014 J. Phys. B: At. Mol. Opt. Phys. 47 105702) and tenuous space plasmas ( H II regions) has not been fully explored and exploited, may yet be found to yield a number of valuable results for applications to plasma diagnostic techniques based upon rate calculations of atomic processes.
NASA Astrophysics Data System (ADS)
Bonomini, Maria Paula; Juan Ingallina, Fernando; Barone, Valeria; Antonucci, Ricardo; Valentinuzzi, Max; Arini, Pedro David
2016-04-01
The changes that left ventricular hypertrophy (LVH) induces in depolarization and repolarization vectors are well known. We analyzed the performance of the electrocardiographic and vectorcardiographic transverse planes (TP in the ECG and XZ in the VCG) and frontal planes (FP in the ECG and XY in the VCG) in discriminating LVH patients from control subjects. In an age-balanced set of 58 patients, the directions and amplitudes of QRS-complex and T-wave vectors were studied. The repolarization vector significantly decreased in modulus from controls to LVH in the transverse plane (TP: 0.45±0.17 mV vs. 0.24±0.13 mV, p<0.0005; XZ: 0.43±0.16 mV vs. 0.26±0.11 mV, p<0.005), while the depolarization vector significantly changed in angle in the electrocardiographic frontal plane (controls vs. LVH, FP: 48.24±33.66° vs. 46.84±35.44°, p<0.005; XY: 20.28±35.20° vs. 19.35±12.31°, NS). Several LVH indexes were proposed combining such information in both the ECG and VCG spaces. A subset of all those indexes with AUC values greater than 0.7 was further studied. This subset comprised four indexes, three of them belonging to the ECG space. Two of the four indexes presented the best ROC curves (AUC values: 0.78 and 0.75, respectively); one belonged to the ECG space and the other to the VCG space. Both indexes showed a sensitivity of 86% and a specificity of 70%. In conclusion, the proposed indexes can favorably complement LVH diagnosis.
Covariantized vector Galileons
NASA Astrophysics Data System (ADS)
Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo
2016-03-01
Vector Galileons are ghost-free systems containing higher derivative interactions of vector fields. They break the vector gauge symmetry, and the dynamics of the longitudinal vector polarizations acquire a Galileon symmetry in an appropriate decoupling limit in Minkowski space. Using an Arnowitt-Deser-Misner approach, we carefully reconsider the coupling with gravity of vector Galileons, with the aim of studying the necessary conditions to avoid the propagation of ghosts. We develop arguments that put on a more solid footing the results previously obtained in the literature. Moreover, working in analogy with the scalar counterpart, we find indications for the existence of a "beyond Horndeski" theory involving vector degrees of freedom that avoids the propagation of ghosts thanks to secondary constraints. In addition, we analyze a Higgs mechanism for generating vector Galileons through spontaneous symmetry breaking, and we present its consistent covariantization.
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced, namely, a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced better performance than the pure support vector machine and the support vector machine ensemble.
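The modelling means compared above can be mocked up quickly with scikit-learn. The sketch below contrasts a pure SVM, a hybrid PCA+SVM, and a bagged ensemble of hybrid SVMs on synthetic data standing in for business-risk samples; it assumes scikit-learn >= 1.2 (for the `estimator` argument of BaggingClassifier) and is only an illustration of the modelling frame, not the paper's experiments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic financial-ratio-like data standing in for business-risk samples.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)

pure_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
hybrid = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
# Ensemble of hybrid SVMs: random sampling of training data, majority vote.
ensemble_hybrid = BaggingClassifier(estimator=hybrid, n_estimators=15,
                                    max_samples=0.8, random_state=0)

for name, model in [("pure SVM", pure_svm), ("hybrid PCA+SVM", hybrid),
                    ("ensemble of hybrid SVMs", ensemble_hybrid)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```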
Closedness of orbits in a space with SU(2) Poisson structure
NASA Astrophysics Data System (ADS)
Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad
2014-06-01
The closedness of orbits of central forces is addressed in a three-dimensional space in which the Poisson bracket among the coordinates is that of the SU(2) Lie algebra. In particular it is shown that among problems with spherically symmetric potential energies, it is only the Kepler problem for which all bounded orbits are closed. In analogy with the case of the ordinary space, a conserved vector (apart from the angular momentum) is explicitly constructed, which is responsible for the orbits being closed. This is the analog of the Laplace-Runge-Lenz vector. The algebra of the constants of the motion is also worked out.
Associated patterns of insecticide resistance in field populations of malaria vectors across Africa.
Hancock, Penelope A; Wiebe, Antoinette; Gleave, Katherine A; Bhatt, Samir; Cameron, Ewan; Trett, Anna; Weetman, David; Smith, David L; Hemingway, Janet; Coleman, Michael; Gething, Peter W; Moyes, Catherine L
2018-06-05
The development of insecticide resistance in African malaria vectors threatens the continued efficacy of important vector control methods that rely on a limited set of insecticides. To understand the operational significance of resistance we require quantitative information about levels of resistance in field populations to the suite of vector control insecticides. Estimation of resistance is complicated by the sparsity of observations in field populations, variation in resistance over time and space at local and regional scales, and cross-resistance between different insecticide types. Using observations of the prevalence of resistance in mosquito species from the Anopheles gambiae complex sampled from 1,183 locations throughout Africa, we applied Bayesian geostatistical models to quantify patterns of covariation in resistance phenotypes across different insecticides. For resistance to the three pyrethroids tested, deltamethrin, permethrin, and λ-cyhalothrin, we found consistent forms of covariation across sub-Saharan Africa and covariation between resistance to these pyrethroids and resistance to DDT. We found no evidence of resistance interactions between carbamate and organophosphate insecticides or between these insecticides and those from other classes. For pyrethroids and DDT we found significant associations between predicted mean resistance and the observed frequency of kdr mutations in the Vgsc gene in field mosquito samples, with DDT showing the strongest association. These results improve our capacity to understand and predict resistance patterns throughout Africa and can guide the development of monitoring strategies. Copyright © 2018 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Pazó, Diego; Rodríguez, Miguel A.; López, Juan M.
2010-05-01
We study the evolution of finite perturbations in the Lorenz ‘96 model, a meteorological toy model of the atmosphere. The initial perturbations are chosen to be aligned along different dynamic vectors: bred, Lyapunov, and singular vectors. Using a particular vector determines not only the amplification rate of the perturbation but also the spatial structure of the perturbation and its stability under the evolution of the flow. The evolution of perturbations is systematically studied by means of the so-called mean-variance of logarithms diagram that provides in a very compact way the basic information to analyse the spatial structure. We discuss the corresponding advantages of using those different vectors for preparing initial perturbations to be used in ensemble prediction systems, focusing on key properties: dynamic adaptation to the flow, robustness, equivalence between members of the ensemble, etc. Among all the vectors considered here, the so-called characteristic Lyapunov vectors are possibly optimal, in the sense that they are both perfectly adapted to the flow and extremely robust.
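For readers who want to experiment, the sketch below is a minimal Python illustration of one of the dynamic vectors mentioned: the leading (backward) Lyapunov vector of the Lorenz '96 model, obtained by repeatedly propagating a small perturbation and renormalizing. The parameters (N = 40, F = 8, RK4 with dt = 0.01) are common toy choices made here; bred and singular vectors would require additional machinery not shown.

```python
import numpy as np

def lorenz96(x, F=8.0):
    # Lorenz '96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, dt=0.01):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Leading Lyapunov vector/exponent by finite-difference propagation
# of a small perturbation plus renormalization at every step.
N, eps, dt, nsteps = 40, 1e-7, 0.01, 20000
x = 8.0 + 0.1 * np.random.randn(N)
v = np.random.randn(N); v /= np.linalg.norm(v)
log_growth = 0.0
for _ in range(nsteps):
    x_pert = step(x + eps * v, dt)
    x = step(x, dt)
    dv = (x_pert - x) / eps
    g = np.linalg.norm(dv)
    log_growth += np.log(g)
    v = dv / g           # v converges toward the leading Lyapunov vector

print("leading Lyapunov exponent ~", log_growth / (nsteps * dt))
```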
van Herpen, Gerard
2014-01-01
Einthoven not only designed a high quality instrument, the string galvanometer, for recording the ECG, he also shaped the conceptual framework to understand it. He reduced the body to an equilateral triangle and the cardiac electric activity to a dipole, represented by an arrow (i.e. a vector) in the triangle's center. Up to the present day the interpretation of the ECG is based on the model of a dipole vector being projected on the various leads. The model is practical but intuitive, not physically founded. Burger analysed the relation between heart vector and leads according to the principles of physics. It then follows that an ECG lead must be treated as a vector (lead vector) and that the lead voltage is not simply proportional to the projection of the vector on the lead, but must be multiplied by the value (length) of the lead vector, the lead strength. Anatomical lead axis and electrical lead axis are different entities and the anatomical body space must be distinguished from electrical space. Appreciation of these underlying physical principles should contribute to a better understanding of the ECG. The development of these principles by Burger is described, together with some personal notes and a sketch of the personality of this pioneer of medical physics. Copyright © 2014. Published by Elsevier Inc.
Vector representation of lithium and other mica compositions
NASA Technical Reports Server (NTRS)
Burt, Donald M.
1991-01-01
In contrast to mathematics, where a vector of one component defines a line, in chemical petrology a one-component system is a point, and two components are needed to define a line, three for a plane, and four for a space. Here, an attempt is made to show how these differences in the definition of a component can be resolved, with lithium micas used as an example. In particular, the condensed composition space theoretically accessible to Li-Fe-Al micas is shown to be an irregular three-dimensional polyhedron, rather than the triangle Al(3+)-Fe(2+)-Li(+), used by some researchers. This result is demonstrated starting with the annite composition and using exchange operators graphically as vectors that generate all of the other mica compositions.
Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging
2016-12-15
Lie theory and control systems defined on spheres
NASA Technical Reports Server (NTRS)
Brockett, R. W.
1972-01-01
It is shown that in constructing a theory for the most elementary class of control problems defined on spheres, some results from the Lie theory play a natural role. To understand controllability, optimal control, and certain properties of stochastic equations, Lie theoretic ideas are needed. The framework considered here is the most natural departure from the usual linear system/vector space problems which have dominated control systems literature. For this reason results are compared with those previously available for the finite dimensional vector space case.
Space Object Classification Using Fused Features of Time Series Data
NASA Astrophysics Data System (ADS)
Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.
In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.
Zhang, Lijia; Liu, Bo; Xin, Xiangjun
2015-06-15
A secure enhanced coherent optical multi-carrier system based on Stokes vector scrambling is proposed and experimentally demonstrated. The optical signal with four-dimensional (4D) modulation space has been scrambled intra- and inter-subcarriers, where a multi-layer logistic map is adopted as the chaotic model. An experiment with 61.71-Gb/s encrypted multi-carrier signal is successfully demonstrated with the proposed method. The results indicate a promising solution for the physical secure optical communication.
Using trees to compute approximate solutions to ordinary differential equations exactly
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.
2012-02-01
We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.
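As a reminder of the first of the two constructions compared, the sketch below is a minimal Python implementation of classical Gram-Schmidt orthonormalization, the procedure applied to tangent-space vectors in the Gram-Schmidt description; the three input vectors are arbitrary.

```python
import numpy as np

def gram_schmidt(vectors):
    # Classical Gram-Schmidt orthonormalization of the rows of `vectors`,
    # as used for tangent-space basis vectors in Lyapunov analysis.
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(np.allclose(Q @ Q.T, np.eye(3)))   # rows are orthonormal
```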
Sensitivity analysis of the space shuttle to ascent wind profiles
NASA Technical Reports Server (NTRS)
Smith, O. E.; Austin, L. D., Jr.
1982-01-01
A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.
Simple satellite orbit propagator
NASA Astrophysics Data System (ADS)
Gurfil, P.
2008-06-01
An increasing number of space missions require on-board autonomous orbit determination. The purpose of this paper is to develop a simple orbit propagator (SOP) for such missions. Since most satellites are limited by the available processing power, it is important to develop an orbit propagator that will use limited computational and memory resources. In this work, we show how to choose state variables for propagation using the simplest numerical integration scheme available, the explicit Euler integrator. The new state variables are derived by the following rationale: apply a variation-of-parameters not on the gravity-affected orbit, but rather on the gravity-free orbit, and treat the gravity as a generalized force. This ultimately leads to a state vector comprising the inertial velocity and a modified position vector, wherein the product of velocity and time is subtracted from the inertial position. It is shown that the explicit Euler integrator, applied on the new state variables, becomes a symplectic integrator, preserving the Hamiltonian and the angular momentum (or a component thereof in the case of oblateness perturbations). The main application of the proposed propagator is estimation of mean orbital elements. It is shown that the SOP is capable of estimating the mean elements with an accuracy that is comparable to a high-order integrator that consumes an order of magnitude more computational time than the SOP.
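A minimal Python sketch of the change of variables described above: the state is (v, q) with q = r - v t, so that dq/dt = -t a(r) and dv/dt = a(r), and a plain explicit Euler step is applied. The two-body acceleration, Earth parameters, and the circular-orbit test case are choices made here for illustration, not taken from the paper.

```python
import numpy as np

MU = 398600.4418  # km^3/s^2, Earth gravitational parameter (assumed here)

def accel(r):
    # Point-mass two-body acceleration.
    return -MU * r / np.linalg.norm(r)**3

def propagate(r0, v0, dt, nsteps):
    # State variables: inertial velocity v and q = r - v * t
    # (the product of velocity and time subtracted from the inertial position).
    # With these variables dq/dt = -t * a(r) and dv/dt = a(r), so a plain
    # explicit Euler update can be used.
    t, v, q = 0.0, v0.astype(float).copy(), r0.astype(float).copy()
    for _ in range(nsteps):
        r = q + v * t          # recover the inertial position
        a = accel(r)
        q += -t * a * dt
        v += a * dt
        t += dt
    return q + v * t, v

# Circular low-Earth orbit, propagated for roughly one revolution.
r0 = np.array([7000.0, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / 7000.0), 0.0])
r, v = propagate(r0, v0, dt=1.0, nsteps=5900)
print(np.linalg.norm(r), np.linalg.norm(v))
```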
Permutation modulation for quantization and information reconciliation in CV-QKD systems
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
2017-08-01
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR), exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the necessary coding efficiency to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics is used extensively throughout the development, from generation of the seed vector in PM to analysis of error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures, and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.
Space-Time Modelling of Groundwater Level Using Spartan Covariance Function
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Hristopulos, Dionissios
2014-05-01
Geostatistical models often need to handle variables that change in space and in time, such as the groundwater level of aquifers. A major advantage of space-time observations is that a higher number of data supports parameter estimation and prediction. In a statistical context, space-time data can be considered as realizations of random fields that are spatially extended and evolve in time. The combination of spatial and temporal measurements in sparsely monitored watersheds can provide very useful information by incorporating spatiotemporal correlations. Spatiotemporal interpolation is usually performed by applying the standard Kriging algorithms extended in a space-time framework. Spatiotemporal covariance functions for groundwater level modelling, however, have not been widely developed. We present a new non-separable theoretical spatiotemporal variogram function which is based on the Spartan covariance family and evaluate its performance in spatiotemporal Kriging (STRK) interpolation. The original spatial expression (Hristopulos and Elogne 2007) that has been successfully used for the spatial interpolation of groundwater level (Varouchakis and Hristopulos 2013) is modified by defining the following normalized space-time distance: h = √(h_r² + α h_τ²), with h_r = r/ξ_r and h_τ = τ/ξ_τ, where r is the spatial lag vector, τ the temporal lag, ξ_r is the correlation length in position space (r), ξ_τ the correlation length in time (τ), h the normalized space-time lag vector, h = |h| its Euclidean norm, and α the coefficient that determines the relative weight of the time lag. The space-time experimental semivariogram is determined from the biannual (wet and dry period) time series of groundwater level residuals (obtained from the original series after trend removal) between the years 1981 and 2003 at ten sampling stations located in the Mires hydrological basin on the island of Crete (Greece). After the hydrological year 2002-2003 there is a significant groundwater level increase during the wet period of 2003-2004 and a considerable drop during the dry period of 2005-2006. Both periods are associated with significant annual changes in the precipitation compared to the basin average, i.e., a 40% increase and a 65% decrease, respectively. We use STRK to 'predict' the groundwater level for the two selected hydrological periods (wet period of 2003-2004 and dry period of 2005-2006) at each sampling station. The predictions are validated using the respective measured values. The novel Spartan spatiotemporal covariance function gives a mean absolute relative prediction error of 12%. This is 45% lower than the respective value obtained with the commonly used product-sum covariance function, and 31% lower than the respective value obtained with a non-separable function based on the diffusion equation (Kolovos et al. 2010). The advantage of the Spartan space-time covariance model is confirmed with statistical measures such as the root mean square standardized error (RMSSE), the modified coefficient of model efficiency, E' (Legates and McCabe, 1999), and the modified Index of Agreement, IoA' (Janssen and Heuberger, 1995). Hristopulos, D. T. and Elogne, S. N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs random fields. IEEE Transactions on Information Theory, 53, 4667-4467. Janssen, P.H.M. and Heuberger, P.S.C. 1995. Calibration of process-oriented models. Ecological Modelling, 83, 55-66. Kolovos, A., Christakos, G., Hristopulos, D. T. and Serre, M. L. 2004.
Methods for generating non-separable spatiotemporal covariance models with potential environmental applications. Advances in Water Resources, 27 (8), 815-830. Legates, D.R. and McCabe Jr., G.J. 1999. Evaluating the use of 'goodness-of-fit' measures in hydrologic and hydro climatic model validation. Water Resources Research, 35, 233-241. Varouchakis, E. A. and Hristopulos, D. T. 2013. Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables. Advances in Water Resources, 52, 34-49.
A new method to cluster genomes based on cumulative Fourier power spectrum.
Dong, Rui; Zhu, Ziyue; Yin, Changchuan; He, Rong L; Yau, Stephen S-T
2018-06-20
Analyzing phylogenetic relationships using mathematical methods has always been of importance in bioinformatics. Quantitative research may interpret the raw biological data in a precise way. Multiple Sequence Alignment (MSA) is used frequently to analyze biological evolution, but is very time-consuming. When the scale of the data is large, alignment methods cannot finish the calculation in a reasonable time. Therefore, we present a new method using moments of the cumulative Fourier power spectrum for clustering DNA sequences. Each sequence is translated into a vector in Euclidean space. Distances between the vectors can reflect the relationships between sequences. The mapping between the spectra and the moment vector is one-to-one, which means that no information is lost in the power spectra during the calculation. We cluster and classify several datasets, including Influenza A, primates, and human rhinovirus (HRV) datasets, to build up the phylogenetic trees. Results show that the new proposed cumulative Fourier power spectrum is much faster and more accurate than MSA and another alignment-free method known as k-mer. The research provides new insights into the study of phylogeny, evolution, and efficient DNA comparison algorithms for large genomes. The computer programs of the cumulative Fourier power spectrum are available at GitHub (https://github.com/YaulabTsinghua/cumulative-Fourier-power-spectrum). Copyright © 2018. Published by Elsevier B.V.
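The general flavour of such an alignment-free embedding can be sketched in a few lines of Python. The indicator sequences, normalization, and moment definitions below are choices made here for illustration only (the authors' released code should be consulted for the exact construction), and the toy sequences are made up.

```python
import numpy as np

def cumulative_power_moments(seq, n_moments=10):
    # Map a DNA sequence to a numeric vector: per-nucleotide indicator
    # sequences -> FFT power spectra -> cumulative sums -> a few weighted
    # moments per nucleotide, concatenated into one feature vector.
    feats = []
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        ps = np.abs(np.fft.fft(u))**2
        cum = np.cumsum(ps[1:])                    # drop the DC term
        cum = cum / (cum[-1] + 1e-12)              # normalize across lengths
        k = np.arange(1, len(cum) + 1) / len(cum)
        feats.extend(np.sum(cum * k**j) / len(cum) for j in range(1, n_moments + 1))
    return np.array(feats)

# Toy comparison: distances between the vectors reflect sequence similarity.
a = cumulative_power_moments("ATGCGTACGTTAGCATGCGTACGTTAGC")
b = cumulative_power_moments("ATGCGTACGTAAGCATGCGTACGTTAGC")
c = cumulative_power_moments("GGGGGCCCCCAAAAATTTTTGGGGGCCC")
print(np.linalg.norm(a - b), np.linalg.norm(a - c))
```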
Recent Developments In Theory Of Balanced Linear Systems
NASA Technical Reports Server (NTRS)
Gawronski, Wodek
1994-01-01
Report presents theoretical study of some issues of controllability and observability of a system represented by a linear, time-invariant mathematical model of the form ẋ = Ax + Bu, y = Cx + Du, x(0) = x₀, where x is an n-dimensional vector representing the state of the system; u is a p-dimensional vector representing the control input to the system; y is a q-dimensional vector representing the output of the system; n, p, and q are integers; x(0) is the initial (zero-time) state vector; and the set of matrices (A, B, C, D) is said to constitute a state-space representation of the system.
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
Searching for transcription factor binding sites in vector spaces
2012-01-01
Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
NASA Astrophysics Data System (ADS)
Padhee, Varsha
Common Mode Voltage (CMV) in any power converter has been the major contributor to premature motor failures, bearing deterioration, shaft voltage build up and electromagnetic interference. Intelligent control methods like Space Vector Pulse Width Modulation (SVPWM) techniques provide immense potential and flexibility to reduce CMV, thereby targeting all the aforementioned problems. Other solutions like passive filters, shielded cables and EMI filters add to the volume and cost metrics of the entire system. Smart SVPWM techniques, therefore, come with the very important advantage of being an economical solution. This thesis discusses a modified space vector technique applied to an Indirect Matrix Converter (IMC) which results in the reduction of common mode voltages and other advanced features. The conventional indirect space vector pulse-width modulation (SVPWM) method of controlling matrix converters involves the usage of two adjacent active vectors and one zero vector for both rectifying and inverting stages of the converter. By suitable selection of space vectors, the rectifying stage of the matrix converter can generate different levels of virtual DC-link voltage. This capability can be exploited for operation of the converter in different ranges of modulation indices for varying machine speeds. This results in lower common mode voltage and improves the harmonic spectrum of the output voltage, without increasing the number of switching transitions as compared to conventional modulation. To summarize, it can be said that the responsibility of formulating output voltages with a particular magnitude and frequency has been transferred solely to the rectifying stage of the IMC. Estimation of the degree of distortion in the three-phase output voltage is another facet discussed in this thesis. An understanding of the SVPWM technique and the switching sequence of the space vectors in detail gives the potential to estimate the RMS value of the switched output voltage of any converter. This conceivably aids the sizing and design of output passive filters. An analytical estimation method has been presented to achieve this purpose for an IMC. Knowledge of the fundamental component in the output voltage can be utilized to calculate its Total Harmonic Distortion (THD). The effectiveness of the proposed SVPWM algorithms and the analytical estimation technique is substantiated by simulations in MATLAB/Simulink and experiments on a laboratory prototype of the IMC. Proper comparison plots have been provided to contrast the performance of the proposed methods with the conventional SVPWM method. The behavior of output voltage distortion and CMV with variation in operating parameters like modulation index and output frequency has also been analyzed.
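For orientation, the sketch below shows the sector and dwell-time calculation of conventional two-level SVPWM in Python. It is only the textbook building block (with one common normalization of the modulation index), not the modified indirect-matrix-converter scheme developed in the thesis, and the switching period is an arbitrary example value.

```python
import numpy as np

def svpwm_dwell_times(m, theta, Ts):
    # Conventional two-level SVPWM (illustrative only; not the modified IMC scheme).
    # m: modulation index (normalized so m <= 1 stays inside the hexagon here),
    # theta: reference-vector angle in radians, Ts: switching period.
    # Returns the sector (1..6) and the dwell times (T1, T2, T0).
    theta = theta % (2 * np.pi)
    sector = int(theta // (np.pi / 3)) + 1
    th = theta - (sector - 1) * np.pi / 3          # angle within the sector
    T1 = m * Ts * np.sin(np.pi / 3 - th)           # time on the first active vector
    T2 = m * Ts * np.sin(th)                       # time on the second active vector
    T0 = Ts - T1 - T2                              # remaining time on the zero vectors
    return sector, T1, T2, T0

Ts = 1e-4   # 10 kHz switching period (example value)
for deg in (10, 75, 200):
    print(svpwm_dwell_times(0.8, np.deg2rad(deg), Ts))
```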
Viral Vectors for Use in the Development of Biodefense Vaccines
2005-06-17
The use of viruses, for example adenovirus, vaccinia virus, and Venezuelan equine encephalitis virus, as vaccine vectors has enabled researchers to develop effective means for countering biowarfare. (Fragmentary table-of-contents entry: 2.1.3. Vaccinia virus-vectored Venezuelan equine encephalitis vaccines.)
A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge.
Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E
2017-01-01
The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases of up to 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.
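The concept-space translation step can be illustrated with a minimal sketch: a Spanish bag-of-concepts vector is mapped into the English concept space through an interlanguage-link dictionary and then compared to an English-trained class representation by cosine similarity. The dictionary entries, weights, and function names below are invented placeholders, not data or code from the study.

```python
from collections import Counter

# Hypothetical interlanguage-link map: Spanish Wikipedia concept -> English concept.
INTERLANGUAGE = {
    "Neumonía": "Pneumonia",
    "Antibiótico": "Antibiotic",
    "Fiebre": "Fever",
}

def to_english_space(spanish_concept_counts):
    """Translate a Spanish bag-of-concepts vector into the English concept space,
    dropping concepts without an interlanguage link."""
    english = Counter()
    for concept, weight in spanish_concept_counts.items():
        if concept in INTERLANGUAGE:
            english[INTERLANGUAGE[concept]] += weight
    return english

def cosine(a, b):
    """Cosine similarity of two sparse bag-of-concepts vectors."""
    dot = sum(w * b.get(c, 0.0) for c, w in a.items())
    na = sum(w * w for w in a.values()) ** 0.5
    nb = sum(w * w for w in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

doc_es = Counter({"Neumonía": 3, "Fiebre": 2, "Antibiótico": 1})
centroid_en = Counter({"Pneumonia": 2.5, "Fever": 1.0, "Cough": 0.5})  # e.g., a class centroid
print(round(cosine(to_english_space(doc_es), centroid_en), 3))
```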
NASA Technical Reports Server (NTRS)
Millard, Jon
2014-01-01
The European Space Agency (ESA) has entered into a partnership with the National Aeronautics and Space Administration (NASA) to develop and provide the Service Module (SM) for the Orion Multipurpose Crew Vehicle (MPCV) Program. The European Service Module (ESM) will provide main engine thrust by utilizing the Space Shuttle Program Orbital Maneuvering System Engine (OMS-E). Thrust Vector Control (TVC) of the OMS-E will be provided by the Orbital Maneuvering System (OMS) TVC, also used during the Space Shuttle Program. NASA will be providing the OMS-E and OMS TVC to ESA as Government Furnished Equipment (GFE) to integrate into the ESM. This presentation will describe the OMS-E and OMS TVC and discuss the implementation of the hardware for the ESM.
Modal vector estimation for closely spaced frequency modes
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.; Blair, M.
1982-01-01
Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, the following modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies, which makes it difficult to determine valid mode shapes using single-shaker test methods. By employing band-selectable analysis (zoom) techniques and Kennedy-Pancu circle fitting or some multiple-degree-of-freedom (MDOF) curve-fit procedure, the usefulness of the single-shaker approach can be extended.
NASA Astrophysics Data System (ADS)
Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.
2017-09-01
In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
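The two ideas emphasized above, blocked matrix-vector products and orthogonalization performed once per outer sweep rather than inside the correction solves, can be conveyed with a toy restarted block-Davidson-style loop. This dense-matrix sketch with a diagonal preconditioner and no inner linear solves is an assumption-laden illustration, not the authors' block J-D implementation.

```python
import numpy as np

def block_davidson(A, nev=4, block=6, iters=60, tol=1e-8):
    """Toy restarted block-Davidson iteration for the lowest eigenpairs of a symmetric A."""
    n = A.shape[0]
    V = np.linalg.qr(np.random.default_rng(0).standard_normal((n, block)))[0]
    diag = np.diag(A)
    for _ in range(iters):
        AV = A @ V                                       # blocked matrix-vector product
        theta, S = np.linalg.eigh(V.T @ AV)              # Rayleigh-Ritz on the current subspace
        X = V @ S[:, :block]                             # lowest Ritz vectors
        R = A @ X - X * theta[:block]                    # blocked residuals
        if np.max(np.linalg.norm(R, axis=0)) < tol:
            break
        T = R / (theta[:block] - diag[:, None] + 1e-12)  # diagonal (Jacobi) preconditioner
        T -= X @ (X.T @ T)                               # orthogonalize corrections once per sweep
        V = np.linalg.qr(np.hstack([X, T]))[0]           # restart the search space as [X, T]
    return theta[:nev], X[:, :nev]

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2 + np.diag(np.arange(200, dtype=float))
print(np.round(block_davidson(A)[0], 4))       # approximate lowest eigenvalues
print(np.round(np.linalg.eigvalsh(A)[:4], 4))  # reference values
```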
Cosmology in generalized Proca theories
NASA Astrophysics Data System (ADS)
De Felice, Antonio; Heisenberg, Lavinia; Kase, Ryotaro; Mukohyama, Shinji; Tsujikawa, Shinji; Zhang, Ying-li
2016-06-01
We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to reconstruct the objective function rigorously. Then, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates lie within it and allowing the search to be carried out directly by the least squares ambiguity decorrelation (LAMBDA) method. Some of the vector candidates are then further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline-length constraint, the new method can exploit a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.
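The role of the baseline constraint can be sketched as follows: each integer-ambiguity candidate implies a least-squares baseline vector, and candidates whose implied baseline length is inconsistent with the a priori length are discarded. The toy geometry, the threshold, and the simple length test below are illustrative assumptions, not the paper's derived inequality.

```python
import numpy as np

def baseline_from_ambiguity(phase_cycles, N, wavelength, G):
    """Least-squares baseline vector once the integer ambiguities N are fixed:
    solve G b = wavelength * (phase - N)."""
    rhs = wavelength * (phase_cycles - N)
    b, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return b

def filter_candidates(candidates, phase_cycles, wavelength, G, length_prior, length_tol):
    """Keep only ambiguity candidates whose implied baseline length matches the prior."""
    kept = []
    for N in candidates:
        b = baseline_from_ambiguity(phase_cycles, N, wavelength, G)
        if abs(np.linalg.norm(b) - length_prior) < length_tol:
            kept.append((N, b))
    return kept

# Toy geometry: 5 double-difference line-of-sight rows, a ~2 m baseline, L1 wavelength.
rng = np.random.default_rng(2)
G = rng.standard_normal((5, 3))
b_true = np.array([1.2, -1.0, 1.2])
lam = 0.19
N_true = np.array([3, -1, 2, 0, 4])
phase = (G @ b_true) / lam + N_true
cands = [N_true,
         N_true + np.array([3, -2, 0, 1, 0]),
         N_true + np.array([0, 5, -4, 0, 2])]
for N, b in filter_candidates(cands, phase, lam, G,
                              length_prior=np.linalg.norm(b_true), length_tol=0.03):
    print(N, np.round(b, 3))
```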
NASA Astrophysics Data System (ADS)
Banshchikova, M. A.; Chuvashov, I. N.; Kuzmin, A. K.; Kruchenitskii, G. M.
2018-05-01
Results of magnetic conjugation of image fragments of auroral emissions at different altitudes along the magnetic field lines, and preliminary results of evaluating their influence on the accuracy of remote mapping of the energy characteristics of precipitating electrons, are presented. The results are obtained using a field-line tracing code that is an integral part of the Vector M software, which is intended for calculating accompanying, geophysical, and astronomical information for the center of mass of a space vehicle (SV) and for remote observation of aurora by means of the Aurovisor-VIS/MP imager onboard the SV Meteor-MP, which is to be launched.
Pattern recognition invariant under changes of scale and orientation
NASA Astrophysics Data System (ADS)
Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain
1997-08-01
We have used a modified version of the method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, and lines in a multi-dimensional feature space correspond to each target over out-of-plane orientations spanning 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.
Gravitational form factors and decoupling in 2D
NASA Astrophysics Data System (ADS)
Ribeiro, Tiago G.; Shapiro, Ilya L.; Zanusso, Omar
2018-07-01
We calculate and analyse non-local gravitational form factors induced by quantum matter fields in curved two-dimensional space. The calculations are performed for scalars, spinors and massive vectors by means of the covariant heat kernel method up to the second order in the curvature and confirmed using Feynman diagrams. The analysis of the ultraviolet (UV) limit reveals a generalized "running" form of the Polyakov action for a nonminimal scalar field and the usual Polyakov action in the conformally invariant cases. In the infrared (IR) we establish the gravitational decoupling theorem, which can be seen directly from the form factors or from the physical beta function for fields of any spin.
Use of digital control theory state space formalism for feedback at SLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himel, T.; Hendrickson, L.; Rouse, F.
The algorithms used in the database-driven SLC fast-feedback system are based on the state space formalism of digital control theory. These are implemented as a set of matrix equations which use a Kalman filter to estimate a vector of states from a vector of measurements, and then apply a gain matrix to determine the actuator settings from the state vector. The matrices used in the calculation are derived offline using Linear Quadratic Gaussian minimization. For a given noise spectrum, this procedure minimizes the rms of the states (e.g., the position or energy of the beam). The offline program also allows simulation of the loop's response to arbitrary inputs, and calculates its frequency response. 3 refs., 3 figs.
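The two matrix equations described, a Kalman-filter update of the state estimate from the measurement vector followed by a gain matrix mapping the estimate to actuator settings, can be sketched per pulse as below. The toy state, gain values, and noise levels are placeholders, not SLC parameters.

```python
import numpy as np

# Toy 2-state loop: x = (beam position, beam energy error); measurements y = Hx + noise.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
B = np.array([[0.0], [0.05]])            # actuator influence
H = np.eye(2)                            # both states are measured
K = np.array([[0.6, 0.0], [0.0, 0.6]])   # Kalman gain (derived offline in practice)
G = np.array([[0.0, 8.0]])               # feedback gain mapping state estimate -> actuator

x_est = np.zeros(2)
x_true = np.array([0.3, -0.2])
rng = np.random.default_rng(0)
for pulse in range(20):
    y = H @ x_true + 0.01 * rng.standard_normal(2)   # noisy measurement vector
    x_pred = A @ x_est                               # predict
    x_est = x_pred + K @ (y - H @ x_pred)            # Kalman measurement update
    u = -(G @ x_est)                                 # actuator setting from the state estimate
    x_true = A @ x_true + B @ u                      # plant response
print(np.round(x_true, 4), np.round(x_est, 4))
```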
A novel double fine guide sensor design on space telescope
NASA Astrophysics Data System (ADS)
Zhang, Xu-xu; Yin, Da-yi
2018-02-01
To obtain high-precision attitude for a space telescope, a double marginal-FOV (field of view) FGS (Fine Guide Sensor) is proposed. It is composed of two large-area APS CMOS sensors that share the same lens in the main line of sight. More star vectors can be obtained from the two FGS heads and used for high-precision attitude determination. To improve star identification speed, a vector cross product formulation of the inter-star angles, suited to the small marginal FOV and different from the traditional approach, is elaborated, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then used for attitude fusion with the traditional QUEST algorithm. Simulation results show that the system can obtain high-accuracy three-axis attitude and that the scheme is feasible.
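QUEST solves Wahba's problem for the star vectors; a compact stand-in with the same optimum is Davenport's q-method, in which the optimal quaternion is the dominant eigenvector of a 4x4 matrix built from the paired body-frame and catalog vectors. The sketch below is a generic illustration of that fusion step, not the paper's implementation.

```python
import numpy as np

def davenport_q(body_vecs, ref_vecs, weights=None):
    """Davenport q-method: optimal attitude quaternion (x, y, z, w) from paired
    unit vectors measured in the body frame and known in the inertial frame."""
    if weights is None:
        weights = np.ones(len(body_vecs))
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]          # eigenvector of the largest eigenvalue

def quat_to_matrix(q):
    """Attitude matrix mapping reference-frame vectors to body-frame vectors."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y + w*z),     2*(x*z - w*y)],
        [2*(x*y - w*z),     1 - 2*(x*x + z*z), 2*(y*z + w*x)],
        [2*(x*z + w*y),     2*(y*z - w*x),     1 - 2*(x*x + y*y)],
    ])

# Toy check: three catalog stars rotated into the body frame by a known attitude.
ref = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
true_A = quat_to_matrix(np.array([0.0, 0.0, np.sin(0.2), np.cos(0.2)]))  # 0.4 rad about z
body = [true_A @ r for r in ref]
q = davenport_q(body, ref)
print(np.round(quat_to_matrix(q) - true_A, 6))   # should be ~zero
```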
Space-based IR tracking bias removal using background star observations
NASA Astrophysics Data System (ADS)
Clemons, T. M., III; Chang, K. C.
2009-05-01
This paper provides the results of a proposed methodology for removing sensor bias from a space-based infrared (IR) tracking system through the use of stars detected in the background field of the tracking sensor. The tracking system consists of two satellites flying in a lead-follower formation tracking a ballistic target. Each satellite is equipped with a narrow-view IR sensor that provides azimuth and elevation to the target. The tracking problem is made more difficult due to a constant, non-varying or slowly varying bias error present in each sensor's line of sight measurements. As known stars are detected during the target tracking process, the instantaneous sensor pointing error can be calculated as the difference between star detection reading and the known position of the star. The system then utilizes a separate bias filter to estimate the bias value based on these detections and correct the target line of sight measurements to improve the target state vector. The target state vector is estimated through a Linearized Kalman Filter (LKF) for the highly non-linear problem of tracking a ballistic missile. Scenarios are created using Satellite Toolkit(C) for trajectories with associated sensor observations. Mean Square Error results are given for tracking during the period when the target is in view of the satellite IR sensors. The results of this research provide a potential solution to bias correction while simultaneously tracking a target.
Real-time optical laboratory solution of parabolic differential equations
NASA Technical Reports Server (NTRS)
Casasent, David; Jackson, James
1988-01-01
An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.
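The explicit scheme for the transient diffusion equation in two space variables reduces each time step to a single matrix-vector product, which is the form that maps onto a matrix-vector processor. The grid size, boundary treatment, and diffusion number below are illustrative choices.

```python
import numpy as np

def explicit_diffusion_matrix(n, alpha):
    """Matrix M such that u^{k+1} = M u^k for the 2-D explicit (FTCS) diffusion scheme
    on an n x n interior grid with Dirichlet boundaries, where alpha = D*dt/dx^2."""
    N = n * n
    M = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            p = i * n + j
            M[p, p] = 1.0 - 4.0 * alpha
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    M[p, ii * n + jj] = alpha
    return M

n, alpha = 16, 0.2                    # alpha <= 0.25 keeps the explicit scheme stable
M = explicit_diffusion_matrix(n, alpha)
u = np.zeros(n * n)
u[(n // 2) * n + n // 2] = 1.0        # point source in the middle of the grid
for _ in range(100):                  # each time step is one matrix-vector multiply
    u = M @ u
print(round(float(u.sum()), 6), round(float(u.max()), 6))
```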
Cloud tracing: Visualization of the mixing of fluid elements in convection-diffusion systems
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Philip J.
1993-01-01
This paper describes a highly interactive method for computer visualization of the basic physical process of dispersion and mixing of fluid elements in convection-diffusion systems. It is based on transforming the vector field from a traditionally Eulerian reference frame into a Lagrangian reference frame. Fluid elements are traced through the vector field for the mean path as well as the statistical dispersion of the fluid elements about the mean position by using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of fluid elements are traced and are not just mean paths. We have used this method to visualize the simulation of an industrial incinerator to help identify mechanisms for poor mixing.
Climate Change and Vector Borne Diseases on NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; DeYoung, Russell J.; Shepanek, Marc A.; Kamel, Ahmed
2014-01-01
Increasing global temperature, weather patterns with above-average storm intensities, and higher sea levels have been identified as phenomena associated with global climate change. As a causal system, climate change could contribute to vector-borne diseases in humans. Vectors of concern in the vicinity of Langley Research Center include mosquitoes and ticks that transmit diseases originating regionally, nationwide, or from outside the US. Recognizing that vector-borne diseases propagate under changing climatic conditions, and understanding the conditions in which they may exist or spread, presents opportunities for monitoring their progress and mitigating their potential impacts through communication, continued monitoring, and adaptation. Because personnel provide direct and fundamental support to NASA mission success, a continuous and improved understanding of climatic conditions, and of the disease consequences that result from them, helps to reduce risk in terrestrial space technologies, ground operations, and space research. This research addresses climatic conditions that promote environments conducive to the increase of disease vectors. The investigation includes evaluation of local mosquito population counts and rainfall data for statistical correlation, and identification of planning recommendations unique to LaRC and other NASA Centers to assess adaptation approaches and Center-level planning strategies.
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using a mean-removed error model and a Human Visual System (HVS) model. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. The vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
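The codebook-free idea can be sketched as follows: Laplacian random vectors with the transmitted mean lambda are generated from a seeded random number generator, shaped by weighting their DCT coefficients, and inverse transformed, so encoder and decoder rebuild the same codebook from lambda alone. The vector size, weight curve, and seed handling below are illustrative stand-ins for the HVS model and the actual parameters.

```python
import numpy as np
from scipy.fft import dct, idct

def mvq_codebook(lam, n_vectors=256, dim=16, seed=0):
    """Generate an MVQ-style codebook: Laplacian random vectors (scale lam) whose DCT
    coefficients are weighted by a perceptual low-pass curve, then inverse transformed.
    The seed and weights must match at encoder and decoder so both rebuild the same book."""
    rng = np.random.default_rng(seed)
    vectors = rng.laplace(loc=0.0, scale=lam, size=(n_vectors, dim))
    weights = 1.0 / (1.0 + np.arange(dim))        # placeholder HVS weighting, favors low frequencies
    return idct(dct(vectors, norm="ortho") * weights, norm="ortho")

def encode(blocks, codebook):
    """Nearest-neighbour VQ: index of the closest codebook vector for each block."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

lam = 4.0                                          # estimated from the mean-removed image sample
book = mvq_codebook(lam)
blocks = np.random.default_rng(1).laplace(scale=lam, size=(10, 16))
idx = encode(blocks, book)
print(idx, np.round(book[idx[0], :4], 2))
```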
Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics. PMID:25954306
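The two binding operators being compared can be illustrated directly: circular convolution (with approximate recovery by circular correlation) and binding by a fixed random permutation (with exact recovery by the inverse permutation). The dimensionality, normalization, and toy pairing below are illustrative choices.

```python
import numpy as np

def conv_bind(a, b):
    """Circular convolution binding (holographic reduced representation)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def conv_unbind(c, a):
    """Approximate recovery of b from c = a (*) b via circular correlation."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)))

def perm_bind(a, b, perm):
    """Permutation-based binding: superpose a with a permuted copy of b."""
    return a + b[perm]

def perm_unbind(c, a, perm):
    """Recover b by removing a and applying the inverse permutation."""
    inv = np.argsort(perm)
    return (c - a)[inv]

d = 1024
rng = np.random.default_rng(0)
a, b = rng.standard_normal(d) / np.sqrt(d), rng.standard_normal(d) / np.sqrt(d)
perm = rng.permutation(d)

b_hat_conv = conv_unbind(conv_bind(a, b), a)
b_hat_perm = perm_unbind(perm_bind(a, b, perm), a, perm)
cos = lambda x, y: float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
print(round(cos(b, b_hat_conv), 3), round(cos(b, b_hat_perm), 3))
```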
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
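A replicator network in the sense described, a multilayer perceptron trained to reproduce its input through a narrow middle layer whose activations serve as candidate natural coordinates, can be sketched with a small numpy autoencoder. The layer sizes, tanh activations, plain gradient descent, and the helix test data are illustrative assumptions, not the configuration discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data on a 1-D manifold embedded in 3-D: a noisy helix; the bottleneck should learn
# one "natural coordinate" along it.
t = rng.uniform(0, 4 * np.pi, size=(2000, 1))
X = np.hstack([np.cos(t), np.sin(t), 0.2 * t]) + 0.01 * rng.standard_normal((2000, 3))

sizes = [3, 32, 1, 32, 3]                 # input, hidden, bottleneck, hidden, output
W = [rng.standard_normal((a, b)) / np.sqrt(a) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(s) for s in sizes[1:]]

def forward(x):
    """Return all layer activations; hidden layers use tanh, the output is linear."""
    acts = [x]
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = acts[-1] @ Wi + bi
        acts.append(z if i == len(W) - 1 else np.tanh(z))
    return acts

lr = 0.05
for epoch in range(2000):
    acts = forward(X)
    grad = (acts[-1] - X) / len(X)                       # gradient of mean squared error
    for i in reversed(range(len(W))):
        gW = acts[i].T @ grad
        gb = grad.sum(axis=0)
        grad = grad @ W[i].T
        if i > 0:
            grad = grad * (1 - acts[i] ** 2)             # tanh derivative
        W[i] -= lr * gW
        b[i] -= lr * gb

print("reconstruction MSE:", round(float(np.mean((forward(X)[-1] - X) ** 2)), 5))
print("bottleneck coordinate of first 3 points:", np.round(forward(X[:3])[2].ravel(), 3))
```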
A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization
NASA Astrophysics Data System (ADS)
Binz, Ernst; Pods, Sonja
2006-01-01
In these notes we associate a natural Heisenberg group bundle Ha with a singularity free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.
Vector boson fusion in the inert doublet model
NASA Astrophysics Data System (ADS)
Dutta, Bhaskar; Palacio, Guillermo; Restrepo, Diego; Ruiz-Álvarez, José D.
2018-03-01
In this paper we probe the inert Higgs doublet model at the LHC using the vector boson fusion (VBF) search strategy. We optimize the selection cuts and investigate the parameter space of the model, and we show that the VBF search has a better reach when compared with the monojet searches. We also investigate the Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the 3σ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb⁻¹.
Dual-scale topology optoelectronic processor.
Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H
1991-12-15
The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
Frozen orbit realization using LQR analogy
NASA Astrophysics Data System (ADS)
Nagarajan, N.; Rayan, H. Reno
In the case of remote sensing orbits, the frozen orbit concept minimizes altitude variations over a given region by passive means. This is achieved by establishing the mean eccentricity vector at the orbital poles, i.e., by fixing the mean argument of perigee at 90 deg with an appropriate eccentricity to balance the perturbations due to the zonal harmonics J2 and J3 of the Earth's potential. The eccentricity vector is a vector whose magnitude is the eccentricity and whose direction is the argument of perigee. Launcher dispersions result in an eccentricity vector that is away from the frozen-orbit values. The objective is then to formulate an orbit maneuver strategy that minimizes the fuel required to achieve the frozen orbit in the presence of visibility and impulse constraints. It is shown that the motion of the eccentricity vector around the frozen perigee can be approximated as a circle. Combining the circular motion of the eccentricity vector around the frozen point with the maneuver equation, the following discrete equation is obtained: X(k+1) = AX(k) + Bu(k), where X is the state (the eccentricity vector components), A the state transition matrix, u the scalar control force (dV in this case) and B the control matrix which transforms dV into a change of the eccentricity vector. On this basis, it is shown that the problem of optimizing the fuel can be treated as a Linear Quadratic Regulator (LQR) problem, in which the maneuvers can be computed with control system design tools such as MATLAB by exploiting the LQR analogy.
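With the discrete model X(k+1) = AX(k) + Bu(k) and a quadratic cost, the gain follows from the standard discrete-time Riccati iteration, which is essentially what an LQR design tool automates. In the sketch below, the rotation matrix A mimics the circular motion of the eccentricity vector about the frozen point, while B, the cost weights, and the initial dispersion are illustrative placeholders.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

theta = 2 * np.pi / 60.0                       # eccentricity-vector rotation per maneuver epoch
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = np.array([[0.0], [0.01]])                  # dV changes one eccentricity component (placeholder)
Q = np.eye(2)                                  # penalize deviation from the frozen point
R = np.array([[0.01]])                         # penalize fuel (dV), placeholder weight
K = dlqr(A, B, Q, R)

x = np.array([5e-4, -3e-4])                    # launcher-dispersion offset of the eccentricity vector
for k in range(240):
    u = -(K @ x)                               # maneuver dV for this epoch
    x = A @ x + B @ u
print("final eccentricity-vector offset:", np.round(x, 8))
```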
2014-01-01
Background PermaNet® 3.0 is an insecticide synergist-combination long-lasting insecticidal net designed to have increased efficacy against malaria vectors with metabolic resistance, even when combined with kdr. The current study reports on the impact of this improved tool on entomological indices in an area with pyrethroid-resistant malaria vectors in Nigeria. Methods Baseline entomological indices across eight villages in Remo North LGA of Ogun State provided the basis for selection of three villages (Ilara, Irolu and Ijesa) for comparing the efficacy of PermaNet® 3.0 (PN3.0), PermaNet® 2.0 (PN2.0) and untreated polyester nets as a control (UTC). In each case, nets were distributed to cover all sleeping spaces and were evaluated for insecticidal activity on a 3-monthly basis. Collection of mosquitoes was conducted monthly via window traps and indoor resting catches. The arithmetic means of mosquito catches per house and entomological inoculation rates before and during the intervention were compared, as well as three other outcome parameters: the mean mosquito blood feeding rate, mean mortality and mean parity rates. Results Anopheles gambiae s.l. was the main malaria vector in the three villages, accounting for >98% of the Anopheles population and found in appreciable numbers for 6–7 months. Deltamethrin, permethrin and lambdacyhalothrin resistance were confirmed at Ilara, Irolu and Ijesa. The kdr mutation was the sole resistance mechanism at Ilara, whereas kdr plus P450-based metabolic mechanisms were detected at Irolu and Ijesa. Bioassays repeated on domestically used PN 2.0 and PN 3.0 showed persistent optimal (100%) bio-efficacy for both net types after the 3rd, 6th, 9th and 12th month following net distribution. The use of PN 3.0 significantly reduced mosquito densities with a ‘mass killing’ effect inside houses. Households with PN 3.0 also showed reduced blood feeding as well as lower mosquito parity and sporozoite rates compared to the PN 2.0 and the UTC villages. A significant reduction in the entomological inoculation rate was detected in both the PN 2.0 village (75%) and PN 3.0 village (97%) post LLIN-distribution and not in the UTC village. Conclusion The study confirms the efficacy of PN 3.0 in reducing malaria transmission compared to pyrethroid-only LLINs in the presence of malaria vectors with P450-based metabolic resistance mechanisms. PMID:24886399
Evaluation and Validation of Operational RapidScat Ocean Surface Vector Winds
NASA Astrophysics Data System (ADS)
Chang, Paul; Jelenak, Zorana; Soisuvarn, Seubson; Said, Faozi; Sienkiewicz, Joseph; Brennan, Michael
2015-04-01
NASA launched RapidScat to the International Space Station (ISS) on September 21, 2014 on a two-year mission to support global monitoring of ocean winds for improved weather forecasting and climate studies. The JPL-developed space-based scatterometer is conically scanning and operates at ku-band (13.4 GHz) similar to QuikSCAT. The ISS-RapidScat's measurement swath is approximately 900 kilometers and covers the majority of the ocean between 51.6 degrees north and south latitude (approximately from north of Vancouver, Canada, to the southern tip of Patagonia) in 48 hours. RapidScat data are currently being posted at a spacing of 25 kilometers, but a version to be released in the near future will improve the postings to 12.5 kilometers. RapidScat ocean surface wind vector data are being provided in near real-time to NOAA, and other operational users such as the U.S. Navy, the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the Indian Space Research Organisation (ISRO) and the Royal Netherlands Meteorological Institute (KNMI). The quality of the RapidScat OSVW data are assessed by collocating the data in space and time with "truth" data. Typically "truth" data will include, but are not limited to, the NWS global forecast model analysis (GDAS) fields, buoys, ASCAT, WindSat, AMSR-2, and aircraft measurements during hurricane and winter storm experiment flights. The standard statistical analysis used for satellite microwave wind sensors will be utilized to characterize the RapidScat wind vector retrievals. The global numerical weather prediction (NWP) models are a convenient source of "truth" data because they are available 4 times/day globally which results in the accumulation of a large number of collocations over a relatively short amount of time. The NWP model fields are not "truth" in the same way an actual observation would be, however, as long as there are no systematic errors in the NWP model output the collocations will converge in the mean for winds between approximately 3-20 m/s. The NWP models typically do not properly resolve the very low and high wind speeds in part due to limitations of the spatial scales they can account for. Buoy measurements, aircraft-based measurements and other satellite retrievals can be more directly compared on a point-by-point basis. The RapidScat OSVW validation results will be presented and discussed. Utilization examples of these data in support of NOAA's marine weather forecasting and warning mission will also be presented and discussed.
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
Maxwell Equations and the Redundant Gauge Degree of Freedom
ERIC Educational Resources Information Center
Wong, Chun Wa
2009-01-01
On transformation to the Fourier space (k,[omega]), the partial differential Maxwell equations simplify to algebraic equations, and the Helmholtz theorem of vector calculus reduces to vector algebraic projections. Maxwell equations and their solutions can then be separated readily into longitudinal and transverse components relative to the…
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David Edward; Williams, Brian J.; Parsons, D. Kent
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
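Drawing random cross-section sets from the multivariate normal defined by a mean vector and covariance matrix can be done with a Cholesky factor, as sketched below. The four-group values, the correlation matrix, and the single normalization constraint enforced at the end are invented placeholders meant only to illustrate the kind of obligatory constraint described, not the Pu-239 data or constraints themselves.

```python
import numpy as np

def sample_variations(mean, cov, n_samples, seed=0):
    """Draw cross-section sets from N(mean, cov) using a Cholesky factor."""
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal((n_samples, len(mean)))
    return mean + z @ L.T

# Toy 4-group "cross sections" with a correlated covariance (placeholder values).
mean = np.array([1.9, 1.7, 1.5, 1.2])
corr = np.array([[1.0, 0.6, 0.3, 0.1],
                 [0.6, 1.0, 0.6, 0.3],
                 [0.3, 0.6, 1.0, 0.6],
                 [0.1, 0.3, 0.6, 1.0]])
sigma = 0.02 * mean
cov = corr * np.outer(sigma, sigma)
samples = sample_variations(mean, cov, n_samples=1000)

# Example of an obligatory constraint: rescale each sample so a fixed linear
# combination (e.g., a sum that must be preserved) matches the mean's value.
w = np.ones(4)
samples *= (mean @ w) / (samples @ w)[:, None]
print(np.round(samples.mean(axis=0), 4))
print(np.round(np.cov(samples.T)[0, :2], 6))
```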
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmid, Christoph
We show that there is exact dragging of the axis directions of local inertial frames by a weighted average of the cosmological energy currents via gravitomagnetism for all linear perturbations of all Friedmann-Robertson-Walker (FRW) universes and of Einstein's static closed universe, and for all energy-momentum-stress tensors and in the presence of a cosmological constant. This includes FRW universes arbitrarily close to the Milne universe and the de Sitter universe. Hence the postulate formulated by Ernst Mach about the physical cause for the time-evolution of inertial axes is shown to hold in general relativity for linear perturbations of FRW universes. The time-evolution of local inertial axes (relative to given local fiducial axes) is given experimentally by the precession angular velocity ω_gyro of local gyroscopes, which in turn gives the operational definition of the gravitomagnetic field: B_g ≡ -2 ω_gyro. The gravitomagnetic field is caused by energy currents J_ε via the momentum constraint, Einstein's G^0̂_î equation, (-Δ + μ²) A_g = -16π G_N J_ε with B_g = curl A_g. This equation is analogous to Ampère's law, but it holds for all time-dependent situations. Here Δ is the de Rham-Hodge Laplacian, and Δ = -curl curl for the vorticity sector in Riemannian 3-space. In the solution for an open universe the 1/r² force of Ampère is replaced by a Yukawa force Y_μ(r) = (-d/dr)[(1/R) exp(-μr)], form-identical for FRW backgrounds with K = (-1, 0). Here r is the measured geodesic distance from the gyroscope to the cosmological source, and 2πR is the measured circumference of the sphere centered at the gyroscope and going through the source point. The scale of the exponential cutoff is the H-dot radius, where H is the Hubble rate, the dot denotes the derivative with respect to cosmic time, and μ² = -4(dH/dt). Analogous results hold in closed FRW universes and in Einstein's closed static universe. We list six fundamental tests for the principle formulated by Mach: all of them are explicitly fulfilled by our solutions. We show that only energy currents in the toroidal vorticity sector with l = 1 can affect the precession of gyroscopes. We show that the harmonic decomposition of toroidal vorticity fields in terms of vector spherical harmonics X⁻_lm has radial functions which are form-identical for the 3-sphere, hyperbolic 3-space, and Euclidean 3-space, and are form-identical with the spherical Bessel, Neumann, and Hankel functions. The Appendix gives the de Rham-Hodge Laplacian on vorticity fields in Riemannian 3-spaces by equations connecting the calculus of differential forms with the curl notation. We also give the derivation of the Weitzenböck formula for the difference between the de Rham-Hodge Laplacian Δ and the "rough" Laplacian ∇² on vector fields.
Wong, Gwendolyn K L; Jim, C Y
2016-12-15
Green roof, an increasingly common constituent of urban green infrastructure, can provide multiple ecosystem services and mitigate climate-change and urban-heat-island challenges. Its adoption has been beset by a longstanding preconception of attracting urban pests like mosquitoes. As more cities may become vulnerable to emerging and re-emerging mosquito-borne infectious diseases, the knowledge gap needs to be filled. This study gauges the habitat preference of vector mosquitoes for extensive green roofs vis-à-vis positive and negative control sites in an urban setting. Seven sites in a university campus were selected to represent three experimental treatments: green roofs (GR), ground-level blue-green spaces as positive controls (PC), and bare roofs as negative controls (NC). Mosquito-trapping devices were deployed for a year from March 2015 to 2016. Human-biting mosquito species known to transmit infectious diseases in the region were identified and recorded as target species. Generalized linear models evaluated the effects of site type, season, and weather on vector-mosquito abundance. Our model revealed site type as a significant predictor of vector mosquito abundance, with considerably more vector mosquitoes captured in PC than in GR and NC. Vector abundance was higher in NC than in GR, attributed to the occasional presence of water pools in depressions of roofing membrane after rainfall. Our data also demonstrated seasonal differences in abundance. Weather variables were evaluated to assess human-vector contact risks under different weather conditions. Culex quinquefasciatus, a competent vector of diseases including lymphatic filariasis and West Nile fever, could be the most adaptable species. Our analysis demonstrates that green roofs are not particularly preferred by local vector mosquitoes compared to bare roofs and other urban spaces in a humid subtropical setting. The findings call for a better understanding of vector ecology in diverse urban landscapes to improve disease control efficacy amidst surging urbanization and changing climate.
Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina
2017-01-01
Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; count and position of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropy in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from disperse (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581
Test spaces and characterizations of quadratic spaces
NASA Astrophysics Data System (ADS)
Dvurečenskij, Anatolij
1996-10-01
We show that a test space consisting of the nonzero vectors of a quadratic space E and of the set of all maximal orthogonal systems in E is algebraic iff E is Dacey or, equivalently, iff E is orthomodular. In addition, we present other orthomodularity criteria for quadratic spaces, and using the result of Solèr, we show that they can imply that E is a real, complex, or quaternionic Hilbert space.
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In the method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. The velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
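The essence of velocity-vector blending is ramping from the incoming segment's velocity vector to the outgoing one over a blend window around each via frame; the cycloidal blend function used below is one of the options named above, while the via-frame positions, segment durations, and window length are illustrative.

```python
import numpy as np

def cycloidal_blend(s):
    """Cycloidal blending function: 0 -> 1 with zero slope at both ends, s in [0, 1]."""
    return s - np.sin(2 * np.pi * s) / (2 * np.pi)

def blended_velocity(v_in, v_out, t, t_blend):
    """Blend from the incoming to the outgoing segment velocity over [0, t_blend]."""
    s = np.clip(t / t_blend, 0.0, 1.0)
    return v_in + cycloidal_blend(s) * (v_out - v_in)

# Two straight-line task-space segments meeting at a via frame.
p0, p_via, p1 = np.array([0.0, 0, 0]), np.array([0.5, 0.2, 0]), np.array([1.0, 0.6, 0.3])
v_in = (p_via - p0) / 2.0          # nominal velocity on the incoming segment (2 s duration)
v_out = (p1 - p_via) / 2.0         # nominal velocity on the outgoing segment
for t in np.linspace(0, 0.5, 6):   # 0.5 s blend window around the via frame
    print(round(float(t), 2), np.round(blended_velocity(v_in, v_out, t, 0.5), 3))
```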
Sample levitation and melt in microgravity
NASA Technical Reports Server (NTRS)
Moynihan, Philip I. (Inventor)
1990-01-01
A system is described for maintaining a sample material in a molten state and away from the walls of a container in a microgravity environment, as in a space vehicle. A plurality of sources of electromagnetic radiation, such as an infrared wavelength, are spaced about the object, with the total net electromagnetic radiation applied to the object being sufficient to maintain it in a molten state, and with the vector sum of the applied radiation being in a direction to maintain the sample close to a predetermined location away from the walls of a container surrounding the sample. For a processing system in a space vehicle that orbits the Earth, the net radiation vector is opposite the velocity of the orbiting vehicle.
NASA Astrophysics Data System (ADS)
Amaral, J. T.; Becker, V. M.
2018-05-01
We investigate ρ vector meson production in e p collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.
NASA Astrophysics Data System (ADS)
Hertog, Thomas; Tartaglino-Mazzucchelli, Gabriele; Van Riet, Thomas; Venken, Gerben
2018-02-01
We put forward new explicit realisations of dS/CFT that relate N = 2 supersymmetric Euclidean vector models with reversed spin-statistics in three dimensions to specific supersymmetric Vasiliev theories in four-dimensional de Sitter space. The partition function of the free supersymmetric vector model deformed by a range of low spin deformations that preserve supersymmetry appears to specify a well-defined wave function with asymptotic de Sitter boundary conditions in the bulk. In particular we find the wave function is globally peaked at undeformed de Sitter space, with a low amplitude for strong deformations. This suggests that supersymmetric de Sitter space is stable in higher-spin gravity and in particular free from ghosts. We speculate this is a limiting case of the de Sitter realizations in exotic string theories.
Interacting vector fields in relativity without relativity
NASA Astrophysics Data System (ADS)
Anderson, Edward; Barbour, Julian
2002-06-01
Barbour, Foster and Ó Murchadha have recently developed a new framework, called here the 3-space approach, for the formulation of classical bosonic dynamics. Neither time nor a locally Minkowskian structure of spacetime are presupposed. Both arise as emergent features of the world from geodesic-type dynamics on a space of three-dimensional metric-matter configurations. In fact gravity, the universal light-cone and Abelian gauge theory minimally coupled to gravity all arise naturally through a single common mechanism. It yields relativity - and more - without presupposing relativity. This paper completes the recovery of the presently known bosonic sector within the 3-space approach. We show, for a rather general ansatz, that 3-vector fields can interact among themselves only as Yang-Mills fields minimally coupled to gravity.
Intertwined Hamiltonians in two-dimensional curved spaces
NASA Astrophysics Data System (ADS)
Aghababaei Samani, Keivan; Zarei, Mina
2005-04-01
The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for Euclidean plane, Minkowski plane, Poincaré half plane (AdS2), de Sitter plane (dS2), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle.
Regular and Chaotic Spatial Distribution of Bose-Einstein Condensed Atoms in a Ratchet Potential
NASA Astrophysics Data System (ADS)
Li, Fei; Xu, Lan; Li, Wenwu
2018-02-01
We study the regular and chaotic spatial distribution of Bose-Einstein condensed atoms with a space-dependent nonlinear interaction in a ratchet potential. There exists in the system a space-dependent atomic current that can be tuned via Feshbach resonance technique. In the presence of the space-dependent atomic current and a weak ratchet potential, the Smale-horseshoe chaos is studied and the Melnikov chaotic criterion is obtained. Numerical simulations show that the ratio between the intensities of optical potentials forming the ratchet potential, the wave vector of the laser producing the ratchet potential or the wave vector of the modulating laser can be chosen as the controlling parameters to result in or avoid chaotic spatial distributional states.
A static investigation of the thrust vectoring system of the F/A-18 high-alpha research vehicle
NASA Technical Reports Server (NTRS)
Mason, Mary L.; Capone, Francis J.; Asbury, Scott C.
1992-01-01
A static (wind-off) test was conducted in the static test facility of the Langley 16-foot Transonic Tunnel to evaluate the vectoring capability and isolated nozzle performance of the proposed thrust vectoring system of the F/A-18 high alpha research vehicle (HARV). The thrust vectoring system consisted of three asymmetrically spaced vanes installed externally on a single test nozzle. Two nozzle configurations were tested: a maximum afterburner-power nozzle and a military-power nozzle. Vane size and vane actuation geometry were investigated, and an extensive matrix of vane deflection angles was tested. The nozzle pressure ratios ranged from two to six. The results indicate that the three-vane system can successfully generate multiaxis (pitch and yaw) thrust vectoring. However, large resultant vector angles incurred large thrust losses. Resultant vector angles were always lower than the vane deflection angles. The maximum thrust vectoring angles achieved for the military-power nozzle were larger than the angles achieved for the maximum afterburner-power nozzle.
Srivastava, Preeti; Deb, J K
2002-07-02
A series of fusion vectors containing glutathione-S-transferase (GST) were constructed by inserting GST fusion cassette of Escherichia coli vectors pGEX4T-1, -2 and -3 in corynebacterial vector pBK2. Efficient expression of GST driven by inducible tac promoter of E. coli was observed in Corynebacterium acetoacidophilum. Fusion of enhanced green fluorescent protein (EGFP) and streptokinase genes in this vector resulted in the synthesis of both the fusion proteins. The ability of this recombinant organism to produce several-fold more of the product in the extracellular medium than in the intracellular space would make this system quite attractive as far as the downstream processing of the product is concerned.
NASA Technical Reports Server (NTRS)
Lallemand, Pierre; Luo, Li-Shi
2000-01-01
The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically by using perturbation technique or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long wave-length limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.
NASA Technical Reports Server (NTRS)
Fichtl, G. H.; Holland, R. L.
1978-01-01
A stochastic model of spacecraft motion was developed based on the assumption that the net torque vector due to crew activity and rocket thruster firings is a statistically stationary Gaussian vector process. The process had zero ensemble mean value, and the components of the torque vector were mutually stochastically independent. The linearized rigid-body equations of motion were used to derive the autospectral density functions of the components of the spacecraft rotation vector. The cross-spectral density functions of the components of the rotation vector vanish for all frequencies so that the components of rotation were mutually stochastically independent. The autospectral and cross-spectral density functions of the induced gravity environment imparted to scientific apparatus rigidly attached to the spacecraft were calculated from the rotation rate spectral density functions via linearized inertial frame to body-fixed principal axis frame transformation formulae. The induced gravity process was a Gaussian one with zero mean value. Transformation formulae were used to rotate the principal axis body-fixed frame to which the rotation rate and induced gravity vector were referred to a body-fixed frame in which the components of the induced gravity vector were stochastically independent. Rice's theory of exceedances was used to calculate expected exceedance rates of the components of the rotation and induced gravity vector processes.
Region-based automatic building and forest change detection on Cartosat-1 stereo imagery
NASA Astrophysics Data System (ADS)
Tian, J.; Reinartz, P.; d'Angelo, P.; Ehlers, M.
2013-05-01
In this paper a novel region-based method is proposed for change detection using spaceborne panchromatic Cartosat-1 stereo imagery. In the first step, Digital Surface Models (DSMs) from two dates are generated by semi-global matching. The geometric lateral resolution of the DSMs is 5 m × 5 m and the height accuracy is approximately 3 m (RMSE). In the second step, mean-shift segmentation is applied to the orthorectified images of the two dates to obtain initial regions. A region intersection and merging strategy is proposed to obtain minimum change regions, and multi-level change vectors are extracted for these regions. Finally, change detection is achieved by combining these features with weighted change vector analysis. The evaluation of the results demonstrates that the applied DSM generation method is well suited for Cartosat-1 imagery and that the extracted height values can substantially improve change detection accuracy. Moreover, it is shown that the proposed change detection method can be used robustly for both forest and industrial areas.
Betatron motion with coupling of horizontal and vertical degrees of freedom
Lebedev, V. A.; Bogacz, S. A.
2010-10-21
Presently, the two most frequently used parameterizations of linear x-y coupled motion in accelerator physics are the Edwards-Teng and Mais-Ripken parameterizations. The article is devoted to an analysis of the close relationship between the two representations, thus adding clarity to their physical meaning. It also discusses the relationship between the eigenvectors, the beta-functions, second-order moments and the bilinear form representing the particle ellipsoid in the 4D phase space. It then considers a further development of the Mais-Ripken parameterization in which the particle motion is described by 10 parameters: four beta-functions, four alpha-functions and two betatron phase advances. In comparison with the Edwards-Teng parameterization, the chosen parameterization has the advantage that it works equally well for the analysis of coupled betatron motion in circular accelerators and in transfer lines. In addition, the considered relationship between second-order moments, eigenvectors and beta-functions can be useful in interpreting tracking results and experimental data. As an example, the developed formalism is applied to the FNAL electron cooler and Derbenev's vertex-to-plane adapter.
Rapid Temporal Changes of Boundary Layer Winds
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
2005-01-01
The statistical distribution of the magnitude of the vector wind change over 0.25, 0.5, 1 and 2-h periods based on data from November 1999 through August 2001 is presented. The distributions of the 2-h u and v component wind changes are also presented for comparison. The wind changes at altitudes from 500 to 3000 m were measured using the Eastern Range network of five 915 MHz Doppler radar wind profilers. Quality controlled profiles were produced every 15 minutes for up to sixty gates, each representing 101 m in altitude over the range from 130 m to 6089 m. Five levels, each constituting three consecutive gates, were selected for analysis because of their significance to aerodynamic loads during the Space Shuttle ascent roll maneuver. The distribution of the magnitude of the vector wind change is found to be lognormal consistent with earlier work in the mid-troposphere. The parameters of the distribution vary with time lag, season and altitude. The component wind changes are symmetrically distributed with near-zero means, but the kurtosis coefficient is larger than that of a Gaussian distribution.
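As a small illustration of the fitting step described above, the sketch below generates a synthetic wind-component time series, forms vector wind changes over a fixed lag, and fits a lognormal distribution to their magnitudes. The series, lag and sample size are placeholders, not the profiler data of the study.

```python
# Fit a lognormal distribution to magnitudes of the vector wind change over a fixed lag.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_profiles, lag = 500, 8                         # e.g. 15-min profiles, 8 steps ~ 2 h (assumed)
u = np.cumsum(rng.normal(0, 1.0, n_profiles))    # toy u-component series (m/s)
v = np.cumsum(rng.normal(0, 1.0, n_profiles))    # toy v-component series (m/s)

du, dv = u[lag:] - u[:-lag], v[lag:] - v[:-lag]
mag = np.hypot(du, dv)                           # |vector wind change|

shape, loc, scale = stats.lognorm.fit(mag, floc=0.0)
print("lognormal sigma =", round(shape, 3), " median =", round(scale, 3))
```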
Adjudicating between face-coding models with individual-face fMRI responses
Kriegeskorte, Nikolaus
2017-01-01
The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging. PMID:28746335
Quantitative analysis of eyes and other optical systems in linear optics.
Harris, William F; Evans, Tanya; van Gool, Radboud D
2017-05-01
To show that 14-dimensional spaces of augmented point P and angle Q characteristics, matrices obtained from the ray transference, are suitable for quantitative analysis although only the latter define an inner-product space and only on it can one define distances and angles. The paper examines the nature of the spaces and their relationships to other spaces including symmetric dioptric power space. The paper makes use of linear optics, a three-dimensional generalization of Gaussian optics. Symmetric 2 × 2 dioptric power matrices F define a three-dimensional inner-product space which provides a sound basis for quantitative analysis (calculation of changes, arithmetic means, etc.) of refractive errors and thin systems. For general systems the optical character is defined by the dimensionally-heterogeneous 4 × 4 symplectic matrix S, the transference, or if explicit allowance is made for heterocentricity, the 5 × 5 augmented symplectic matrix T. Ordinary quantitative analysis cannot be performed on them because matrices of neither of these types constitute vector spaces. Suitable transformations have been proposed but because the transforms are dimensionally heterogeneous the spaces are not naturally inner-product spaces. The paper obtains 14-dimensional spaces of augmented point P and angle Q characteristics. The 14-dimensional space defined by the augmented angle characteristics Q is dimensionally homogenous and an inner-product space. A 10-dimensional subspace of the space of augmented point characteristics P is also an inner-product space. The spaces are suitable for quantitative analysis of the optical character of eyes and many other systems. Distances and angles can be defined in the inner-product spaces. The optical systems may have multiple separated astigmatic and decentred refracting elements. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
Barr, Andrew J.; Dube, Bright; Hensor, Elizabeth M. A.; Kingsbury, Sarah R.; Peat, George; Bowes, Mike A.; Sharples, Linda D.
2016-01-01
Objective. There is growing understanding of the importance of bone in OA. Our aim was to determine the relationship between 3D MRI bone shape and total knee replacement (TKR). Methods. A nested case-control study within the Osteoarthritis Initiative cohort identified case knees with confirmed TKR for OA and controls that were matched using propensity scores. Active appearance modelling quantification of the bone shape of all knee bones identified vectors between knees having or not having OA. Vectors were scaled such that −1 and +1 represented the mean non-OA and mean OA shapes. Results. Compared to controls (n = 310), TKR cases (n = 310) had a more positive mean baseline 3D bone shape vector, indicating more advanced structural OA, for the femur [mean 0.98 vs −0.11; difference (95% CI) 1.10 (0.88, 1.31)], tibia [mean 0.86 vs −0.07; difference (95% CI) 0.94 (0.72, 1.16)] and patella [mean 0.95 vs 0.03; difference (95% CI) 0.92 (0.65, 1.20)]. Odds ratios (95% CI) for TKR per normalized unit of 3D bone shape vector for the femur, tibia and patella were: 1.85 (1.59, 2.16), 1.64 (1.42, 1.89) and 1.36 (1.22, 1.50), respectively, all P < 0.001. After including Kellgren–Lawrence grade in a multivariable analysis, only the femur 3D shape vector remained significantly associated with TKR [odds ratio 1.24 (1.02, 1.51)]. Conclusion. 3D bone shape was associated with the endpoint of this study, TKR, with femoral shape being most associated. This study contributes to the validation of quantitative MRI bone biomarkers for OA structure-modification trials. PMID:27185958
Ko, Mi-Hwa
2018-01-01
In this paper, we obtain the Hájek-Rényi inequality and, as an application, we study the strong law of large numbers for H -valued m -asymptotically almost negatively associated random vectors with mixing coefficients [Formula: see text] such that [Formula: see text].
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
Energy theorem for (2+1)-dimensional gravity.
NASA Astrophysics Data System (ADS)
Menotti, P.; Seminara, D.
1995-05-01
We prove a positive energy theorem in (2+1)-dimensional gravity for open universes and any matter energy-momentum tensor satisfying the dominant energy condition. We consider on the space-like initial value surface a family of widening Wilson loops and show that the energy-momentum of the enclosed subsystem is a future-directed time-like vector whose mass is an increasing function of the loop, until it reaches the value 1/4G corresponding to a deficit angle of 2π. At this point the energy-momentum of the system evolves, depending on the nature of a zero-norm vector appearing in the evolution equations, either into a time-like vector of a universe which closes kinematically or into a Gott-like universe whose energy-momentum vector, as first recognized by Deser, Jackiw, and 't Hooft (1984), is space-like. This treatment generalizes results obtained by Carroll, Farhi, Guth, and Olum (1994) for a system of point-like spinless particles to the most general form of matter whose energy-momentum tensor satisfies the dominant energy condition. The treatment is also given for anti-de Sitter (2+1)-dimensional gravity.
Light weakly coupled axial forces: models, constraints, and projections
Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...
2017-05-01
Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.
Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K
2010-06-01
In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace, and a test pattern is classified according to its proximity to the class centroids. The effectiveness of the proposed method is experimentally verified and compared with the multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also serve to compare the performance of VVRKFA with stratified random sampling and sub-sampling.
Eisen, Lars; Lozano-Fuentes, Saul
2009-01-01
The aims of this review paper are to 1) provide an overview of how mapping and spatial and space-time modeling approaches have been used to date to visualize and analyze mosquito vector and epidemiologic data for dengue; and 2) discuss the potential for these approaches to be included as routine activities in operational vector and dengue control programs. Geographical information system (GIS) software is becoming more user-friendly and is now complemented by free mapping software that provides access to satellite imagery and basic feature-making tools and can generate static maps as well as dynamic time-series maps. Our challenge is now to move beyond the research arena by transferring mapping and GIS technologies and spatial statistical analysis techniques in user-friendly packages to operational vector and dengue control programs. This will enable control programs to, for example, generate risk maps for exposure to dengue virus, develop Priority Area Classifications for vector control, and explore socioeconomic associations with dengue risk. PMID:19399163
Cosmology in generalized Proca theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Antonio De; Mukohyama, Shinji; Heisenberg, Lavinia
2016-06-01
We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history.
Prediction of hourly PM2.5 using a space-time support vector regression model
NASA Astrophysics Data System (ADS)
Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang
2018-05-01
Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
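A simplified sketch of the space-time idea described above follows: stations are clustered into subareas, a Gaussian distance-weighted autocorrelation feature is built from neighbouring stations' previous-hour PM2.5, and one SVR is fitted per subarea. The station layout, the bandwidth h and the data are synthetic assumptions, not the Beijing settings of the paper.

```python
# Cluster stations, build a Gauss-weighted spatial autocorrelation feature, fit local SVRs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_sta, n_hr = 30, 200
xy = rng.uniform(0, 50, size=(n_sta, 2))                 # station coordinates (km)
pm = np.abs(rng.normal(60, 20, size=(n_hr, n_sta)))      # toy hourly PM2.5 values

h = 10.0                                                 # Gaussian bandwidth (assumed)
d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * h * h))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)                        # row-normalized neighbour weights

subarea = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(xy)

models = {}
for s in np.unique(subarea):
    idx = np.where(subarea == s)[0]
    X, y = [], []
    for t in range(1, n_hr):
        for i in idx:
            spatial_ac = W[i] @ pm[t - 1]                # weighted neighbour average (lagged)
            X.append([pm[t - 1, i], spatial_ac])         # own lag + spatial autocorrelation term
            y.append(pm[t, i])
    models[s] = SVR(kernel="rbf", C=10.0).fit(np.array(X), np.array(y))
    print("subarea", s, "trained on", len(y), "samples")
```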
Calderone, G.J.; Butler, R.F.
1991-01-01
Random tilting of a single paleomagnetic vector produces a distribution of vectors which is not rotationally symmetric about the original vector and therefore not Fisherian. Monte Carlo simulations were performed on two types of vector distributions: 1) distributions of vectors formed by perturbing a single original vector with a Fisher distribution of bedding poles (each defining a tilt correction) and 2) standard Fisher distributions. These simulations demonstrate that inclinations of vectors drawn from both distributions are biased toward shallow inclinations. The Fisher mean direction of the distribution of vectors formed by perturbing a single vector with random undetected tilts is biased toward shallow inclinations, but this bias is insignificant for angular dispersions of bedding poles less than 20°. -from Authors
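A minimal Monte Carlo sketch of the inclination-shallowing effect noted above is given below: unit vectors are drawn from a Fisher distribution about a mean direction and the mean of the individual inclinations is compared with the true inclination. The precision parameter kappa and the true inclination are illustrative assumptions.

```python
# Sample a Fisher distribution about a mean direction and show the shallow bias of inclinations.
import numpy as np

rng = np.random.default_rng(2)
kappa, inc_true, n = 20.0, 60.0, 20000

# angular deviations from the mean direction (standard inverse-CDF Fisher sampler)
u = rng.uniform(size=n)
cos_t = 1.0 + np.log(1.0 - u * (1.0 - np.exp(-2.0 * kappa))) / kappa
sin_t = np.sqrt(1.0 - cos_t ** 2)
phi = rng.uniform(0, 2 * np.pi, n)

# rotate from the local frame (pole along the mean direction) into (N, E, Down) coordinates
inc = np.radians(inc_true)
mean_dir = np.array([np.cos(inc), 0.0, np.sin(inc)])     # mean direction (N, E, Down)
e1 = np.array([-np.sin(inc), 0.0, np.cos(inc)])          # perpendicular, in the N-Down plane
e2 = np.array([0.0, 1.0, 0.0])                           # East
vecs = (cos_t[:, None] * mean_dir
        + sin_t[:, None] * (np.cos(phi)[:, None] * e1 + np.sin(phi)[:, None] * e2))

inclinations = np.degrees(np.arcsin(np.clip(vecs[:, 2], -1, 1)))   # I = asin(Down component)
print("true inclination:", inc_true)
print("mean of individual inclinations:", round(inclinations.mean(), 2))   # biased shallow
```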
Knick, Steven T.; Rotenberry, J.T.
1998-01-01
We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or those undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits. We then ranked the similarity of all cells in the GIS coverage by their Mahalanobis distance to the mean habitat vector. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were classified as having lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, creates larger Mahalanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well sampled to determine the mean habitat vector, and the distributions of the habitat variables do not change.
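A short sketch of the mapping step described above follows: the mean habitat vector and covariance are estimated from cells with sightings, and every cell of the coverage is ranked by its Mahalanobis distance to that mean. The 7 habitat variables and the grid are simulated placeholders, not the Idaho data.

```python
# Rank GIS cells by Mahalanobis distance to the mean habitat vector of used cells.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n_vars = 7
grid = rng.normal(size=(200, 200, n_vars))                # coverage of habitat variables
used = grid[rng.integers(0, 200, 300), rng.integers(0, 200, 300)]   # cells with recorded use

mu = used.mean(axis=0)                                    # mean habitat vector
VI = np.linalg.inv(np.cov(used, rowvar=False))            # inverse covariance matrix

flat = grid.reshape(-1, n_vars)
d2 = np.einsum('ij,jk,ik->i', flat - mu, VI, flat - mu)   # squared Mahalanobis distances
similarity = 1.0 - chi2.cdf(d2, df=n_vars)                # chi-square based similarity ranking
similarity_map = similarity.reshape(200, 200)
print("most similar cell:", np.unravel_index(similarity_map.argmax(), (200, 200)))
```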
1992-02-01
Vector of thickness variables: V = [t1, t2, ..., tN]. Vector of thickness changes: ΔV = [δt1, δt2, ..., δtN]. Vector of strain derivatives: V_s' = [dF/dt1, dF/dt2, ..., dF/dtN]. Vector of buckling derivatives: V_λ' = [dλ/dt1, dλ/dt2, ..., dλ/dtN]. Then δF = V_s' · ΔV and δλ = V_λ' · ΔV for the linearised sensitivities.
NASA Technical Reports Server (NTRS)
Mcpherron, R. L.
1977-01-01
Procedures are described for the calibration of a vector magnetometer of high absolute accuracy. It is assumed that the calibration will be performed in the magnetic test facility of Goddard Space Flight Center (GSFC). The first main section of the report describes the test equipment and facility calibrations required. The second presents procedures for calibrating individual sensors. The third discusses the calibration of the sensor assembly. In a final section recommendations are made to GSFC for modification of the test facility required to carry out the calibration procedures.
NASA Technical Reports Server (NTRS)
Bown, R. L.; Christofferson, A.; Lardas, M.; Flanders, H.
1980-01-01
A lambda matrix solution technique is being developed to perform an open loop frequency analysis of a high order dynamic system. The procedure evaluates the right and left latent vectors corresponding to the respective latent roots. The latent vectors are used to evaluate the partial fraction expansion formulation required to compute the flexible body open loop feedback gains for the Space Shuttle Digital Ascent Flight Control System. The algorithm is in the final stages of development and will be used to insure that the feedback gains meet the design specification.
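A generic sketch of a lambda-matrix (quadratic eigenvalue) solution by companion linearization is given below: latent roots and right latent vectors come from the linearized problem, and left latent vectors from the transposed problem. The small random matrices are arbitrary stand-ins, not the Shuttle flight control model; the partial fraction expansion step is not reproduced.

```python
# Latent roots and latent vectors of A(lam) = lam**2 * M + lam * C + K via companion linearization.
import numpy as np

rng = np.random.default_rng(4)
n = 3
M, C, K = np.eye(n), rng.normal(size=(n, n)), rng.normal(size=(n, n))

def latent(M, C, K):
    """Latent roots and right latent vectors of lam**2 * M + lam * C + K."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    companion = np.block([[np.zeros((n, n)), np.eye(n)],
                          [-Minv @ K,        -Minv @ C]])
    roots, vecs = np.linalg.eig(companion)
    return roots, vecs[:n, :]                 # top block of each eigenvector is the latent vector

roots, right = latent(M, C, K)
_, left = latent(M.T, C.T, K.T)               # left latent vectors of the original lambda matrix

# residual check: A(lam) x should vanish for each latent pair
i = 0
residual = (roots[i] ** 2 * M + roots[i] * C + K) @ right[:, i]
print("latent root:", roots[i], " |A(lam)x| =", np.linalg.norm(residual))
```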
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
Felisberto, Paulo; Rodriguez, Orlando; Santos, Paulo; Ey, Emanuel; Jesus, Sérgio M.
2013-01-01
This paper aims at estimating the azimuth, range and depth of a cooperative broadband acoustic source with a single vector sensor in a multipath underwater environment, where the received signal is assumed to be a linear combination of echoes of the source emitted waveform. A vector sensor is a device that measures the scalar acoustic pressure field and the vectorial acoustic particle velocity field at a single location in space. The amplitudes of the echoes in the vector sensor components allow one to determine their azimuth and elevation. Assuming that the environmental conditions of the channel are known, source range and depth are obtained from the estimates of elevation and relative time delays of the different echoes using a ray-based backpropagation algorithm. The proposed method is tested using simulated data and is further applied to experimental data from the Makai'05 experiment, where 8–14 kHz chirp signals were acquired by a vector sensor array. It is shown that for short ranges, the position of the source is estimated in agreement with the geometry of the experiment. The method has low computational demands and is thus well suited for use on mobile and light platforms, where space and power are limited. PMID:23857257
Short-interval SMS wind vector determinations for a severe local storms area
NASA Technical Reports Server (NTRS)
Peslen, C. A.
1980-01-01
Short-interval SMS-2 visible digital image data are used to derive wind vectors from cloud tracking on time-lapsed sequences of geosynchronous satellite images. The cloud tracking areas are located in the Central Plains, where on May 6, 1975, hail-producing thunderstorms occurred ahead of a well-defined dry line. Cloud tracking is performed on the Goddard Space Flight Center Atmospheric and Oceanographic Information Processing System. Lower tropospheric cumulus tracers are selected with the assistance of a cloud-top height algorithm. Divergence is derived from the cloud motions using a modified Cressman (1959) objective analysis technique which is designed to organize irregularly spaced wind vectors into uniformly gridded wind fields. The results demonstrate the feasibility of using satellite-derived wind vectors and their associated divergence fields in describing the conditions preceding severe local storm development. For this case, an area of convergence appeared ahead of the dry line and coincided with the developing area of severe weather. The magnitude of the maximum convergence varied between -10^-5 and -10^-14 per second. The number of satellite-derived wind vectors which were required to describe conditions of the low-level atmosphere was adequate before numerous cumulonimbus cells formed. This technique is limited in areas of advanced convection.
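A single-pass sketch of the Cressman-type objective analysis mentioned above is given below: irregularly spaced wind observations are mapped onto a regular grid with weights w = (R^2 - r^2)/(R^2 + r^2) inside an influence radius R. The observation set and the radius are illustrative, and the operational scheme uses several correction passes.

```python
# Single-pass Cressman objective analysis of irregularly spaced u-wind observations.
import numpy as np

rng = np.random.default_rng(5)
obs_xy = rng.uniform(0, 100, size=(80, 2))                  # observation locations (km)
obs_u = 5 + 0.05 * obs_xy[:, 0] + rng.normal(0, 0.5, 80)    # toy u-wind observations (m/s)

R = 25.0                                                    # influence radius (km), assumed
gx, gy = np.meshgrid(np.arange(0, 101, 10), np.arange(0, 101, 10))
grid_u = np.full(gx.shape, np.nan)

for j in range(gx.shape[0]):
    for i in range(gx.shape[1]):
        r2 = (obs_xy[:, 0] - gx[j, i]) ** 2 + (obs_xy[:, 1] - gy[j, i]) ** 2
        inside = r2 < R ** 2
        if inside.any():
            w = (R ** 2 - r2[inside]) / (R ** 2 + r2[inside])
            grid_u[j, i] = np.sum(w * obs_u[inside]) / np.sum(w)

print("gridded u-wind field:\n", np.round(grid_u, 1))
```

A divergence field can then be obtained from the gridded u and v components by finite differences.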
The Ehrenfest force field: Topology and consequences for the definition of an atom in a molecule.
Martín Pendás, A; Hernández-Trujillo, J
2012-10-07
The Ehrenfest force is the force acting on the electrons in a molecule due to the presence of the other electrons and the nuclei. There is an associated force field in three-dimensional space that is obtained by the integration of the corresponding Hermitian quantum force operator over the spin coordinates of all of the electrons and the space coordinates of all of the electrons but one. This paper analyzes the topology induced by this vector field and its consequences for the definition of molecular structure and of an atom in a molecule. Its phase portrait reveals: that the nuclei are attractors of the Ehrenfest force, the existence of separatrices yielding a dense partitioning of three-dimensional space into disjoint regions, and field lines connecting the attractors through these separatrices. From the numerical point of view, when the Ehrenfest force field is obtained as minus the divergence of the kinetic stress tensor, the induced topology was found to be highly sensitive to choice of gaussian basis sets at long range. Even the use of large split valence and highly uncontracted basis sets can yield spurious critical points that may alter the number of attraction basins. Nevertheless, at short distances from the nuclei, in general, the partitioning of three-dimensional space with the Ehrenfest force field coincides with that induced by the gradient field of the electron density. However, exceptions are found in molecules where the electron density yields results in conflict with chemical intuition. In these cases, the molecular graphs of the Ehrenfest force field reveal the expected atomic connectivities. This discrepancy between the definition of an atom in a molecule between the two vector fields casts some doubts on the physical meaning of the integration of Ehrenfest forces over the basins of the electron density.
Aita, Takuyo; Nishigaki, Koichi
2012-11-01
To visualize a bird's-eye view of an ensemble of mitochondrial genome sequences for various species, we recently developed a novel method of mapping a biological sequence ensemble into Three-Dimensional (3D) vector space. First, we represented a biological sequence of a species s by a word-composition vector x(s), where its length |x(s)| represents the sequence length, and its unit vector x(s)/|x(s)| represents the relative composition of the K-tuple words through the sequence, and the size of the dimension, N = 4^K, is the number of all possible words of length K. Second, we mapped the vector x(s) to the 3D position vector y(s), based on the two following simple principles: (1) |y(s)| = |x(s)| and (2) the angle between y(s) and y(t) maximally correlates with the angle between x(s) and x(t). The mitochondrial genome sequences for 311 species, including 177 Animalia, 85 Fungi and 49 Green plants, were mapped into 3D space by using K = 7. The mapping was successful because the angles between vectors before and after the mapping highly correlated with each other (correlation coefficients were 0.92-0.97). Interestingly, the Animalia kingdom is distributed along a single arc belt (just like the Milky Way on a Celestial Globe), and the Fungi and Green plant kingdoms are distributed in a similar arc belt. These two arc belts intersect at their respective middle regions and form a cross structure just like a jet aircraft fuselage and its wings. This new mapping method will allow researchers to intuitively interpret the visual information presented in the maps in a highly effective manner. Copyright © 2012 Elsevier Inc. All rights reserved.
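A minimal sketch of the word-composition step described above follows: a sequence is represented by its K-tuple count vector of dimension N = 4^K, scaled so that the vector length reflects the sequence length and the unit vector the relative word composition. K = 3 and the toy sequence are assumptions for illustration; the subsequent 3D angle-preserving projection is not reproduced.

```python
# Build a K-tuple word-composition vector for a nucleotide sequence.
from itertools import product
import numpy as np

def composition_vector(seq, K=3):
    """K-tuple composition vector, scaled so that |x| equals the sequence length."""
    words = ["".join(p) for p in product("ACGT", repeat=K)]   # all 4**K possible words
    index = {w: i for i, w in enumerate(words)}
    x = np.zeros(len(words))
    for i in range(len(seq) - K + 1):
        w = seq[i:i + K]
        if w in index:                        # skip ambiguous letters such as 'N'
            x[index[w]] += 1.0
    return x / np.linalg.norm(x) * len(seq)   # unit vector carries the relative composition

seq = "ATGCGTACGTTAGCATGCAAGTTACG"            # toy sequence, not a real mitochondrial genome
x = composition_vector(seq, K=3)
print("dimension N = 4**K =", x.size, "  |x| =", round(float(np.linalg.norm(x)), 2))
```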
Macroscopic theory of dark sector
NASA Astrophysics Data System (ADS)
Meierovich, Boris
A simple Lagrangian with the squared covariant divergence of a vector field as a kinetic term turns out to be an adequate tool for the macroscopic description of the dark sector. The zero-mass field acts as the dark energy; its energy-momentum tensor is a simple addition to the cosmological constant [1]. Space-like and time-like massive vector fields describe two different forms of dark matter. The space-like massive vector field is attractive and is responsible for the observed plateau in galaxy rotation curves [2]. The time-like massive field displays repulsive elasticity. In balance with dark energy and ordinary matter it provides a four-parameter family of regular solutions of the Einstein equations describing different possible cosmological and oscillating non-singular scenarios of the evolution of the universe [3]. In particular, the singular big bang turns into a regular inflation-like transition from contraction to expansion with accelerated expansion at late times. The fine-tuned Friedmann-Robertson-Walker singular solution corresponds to the particular limiting case at the boundary of existence of regular oscillating solutions in the absence of vector fields. The simplicity of the generally covariant expression for the energy-momentum tensor makes it possible to analyse the main properties of the dark sector analytically and avoid unnecessary model assumptions. It opens the possibility of tracing how the additional attraction of the space-like dark matter, dominant on the galaxy scale, transforms into the elastic repulsion of the time-like dark matter, dominant on the scale of the Universe. 1. B. E. Meierovich. "Vector fields in multidimensional cosmology". Phys. Rev. D 84, 064037 (2011). 2. B. E. Meierovich. "Galaxy rotation curves driven by massive vector fields: Key to the theory of the dark sector". Phys. Rev. D 87, 103510 (2013). 3. B. E. Meierovich. "Towards the theory of the evolution of the Universe". Phys. Rev. D 85, 123544 (2012).
USDA-ARS?s Scientific Manuscript database
A somatic transformation vector, pDP9, was constructed that provides a simplified means of producing permanently transformed cultured insect cells that support high levels of protein expression of foreign genes. The pDP9 plasmid vector incorporates DNA sequences from the Junonia coenia densovirus th...
NASA Astrophysics Data System (ADS)
Miyama, Masamichi J.; Hukushima, Koji
2018-04-01
A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contain numerous peaks originating from the electron density of surface atoms and/or impurities. The method, based on the relevance vector machine with L1 regularization and k-means clustering, enables separation of the peaks and peak center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison with the conventional least-squares method. An application of the proposed method to experimental data of a metallic oxide thin-film clearly indicates the existence of defects and corresponding local lattice distortions.
Free stream capturing in fluid conservation law for moving coordinates in three dimensions
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru
1991-01-01
The free-stream capturing technique for both the finite-volume (FV) and finite-difference (FD) framework is summarized. For an arbitrary motion of the grid, the FV analysis shows that volumes swept by all six surfaces of the cell have to be computed correctly. This means that the free-stream capturing time-metric terms should be calculated not only from a surface vector of a cell at a single time level, but also from a volume swept by the cell surface in space and time. The FV free-stream capturing formulation is applicable to the FD formulation by proper translation from an FV cell to an FD mesh.
A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.
Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin
2015-09-16
In order to eliminate the fixed-pattern noise (FPN) in the output image of time-delay-integration CMOS image sensor (TDI-CIS), a FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN are corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN are corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard-deviation of row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard-deviation of column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in the real images captured by TDI-CIS are eliminated effectively with the proposed method.
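A small sketch of the gray-value compensation described above follows: RFPN and CFPN are estimated from the row-mean and column-mean vectors of many flat-field frames and then removed from an image. The sensor size, noise levels and flat-field level are assumptions, and for simplicity both patterns are subtracted here, whereas the paper uses different sign conventions for rows and columns.

```python
# Estimate row and column fixed-pattern noise from flat-field frames and correct an image.
import numpy as np

rng = np.random.default_rng(6)
H, W = 128, 256
true_rfpn = rng.normal(0, 5, size=(H, 1))                 # per-row offset (LSB)
true_cfpn = rng.normal(0, 15, size=(1, W))                # per-column offset (LSB)

# 100 frames under uniform illumination (500 LSB) plus temporal noise
flats = 500 + true_rfpn + true_cfpn + rng.normal(0, 2, size=(100, H, W))

mean_frame = flats.mean(axis=0)
row_mean = mean_frame.mean(axis=1, keepdims=True)          # row-mean vector
col_mean = mean_frame.mean(axis=0, keepdims=True)          # column-mean vector
rfpn = row_mean - mean_frame.mean()                        # estimated RFPN
cfpn = col_mean - mean_frame.mean()                        # estimated CFPN

image = 500 + true_rfpn + true_cfpn + rng.normal(0, 2, size=(H, W))
corrected = image - rfpn - cfpn

print("std of column-mean vector before/after correction:",
      round(image.mean(axis=0).std(), 3), round(corrected.mean(axis=0).std(), 3))
```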
Cost of standard indoor ultra-low-volume space spraying as a method to control adult dengue vectors.
Ditsuwan, Thanittha; Liabsuetrakul, Tippawan; Ditsuwan, Vallop; Thammapalo, Suwich
2012-06-01
To assess the costs of standard indoor ultra-low-volume (SID-ULV) space spraying for controlling dengue vectors in Thailand. Resources related to SID-ULV space spraying as a method to control dengue vectors between July and December 2009 were identified, measured and valued from a societal perspective. Information on costs was collected from direct observations, interviews and bookkeeping records. Uncertainty of unit costs was investigated using a bootstrap technique. Costs of SID-ULV were calculated from 18 new dengue cases that covered 1492 surrounding houses. The average coverage of the SID-ULV was 64.4%. In the first round of spraying, 53% of target houses were sprayed and 44.6% in the second round, of which 69.2% and 54.7% received entire indoor space spraying. Unit costs per case, per 10 houses and per 100 m² were USD 705 (95% confidence interval, CI, 539-888), USD 180 (95% CI, 150-212) and USD 23 (95% CI, 17-30). The majority of the SID-ULV unit cost per case was attributed to productivity loss (83.9%) and recurrent costs (15.2%). The unit cost of SID-ULV per case and per house in rural areas was 2.8 and 1.6 times lower than in municipal areas. The estimated annual cost of SID-ULV space spraying from 2005 to 2009, from a healthcare perspective, ranged from USD 5.3 to 10.3 million. The majority of the cost of SID-ULV space spraying was attributed to productivity loss. Potential productivity loss influences the achievement of high coverage, so well-planned SID-ULV space spraying strategies are needed to reduce costs. © 2012 Blackwell Publishing Ltd.
Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A
2014-01-01
Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a wide range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions of these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple output regressor based on support vector machines. We also include a feature selection stage that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for discrimination in the arousal and valence space are the EEG, the Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
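A simplified sketch of the modelling stage described above follows: linear-kernel SVR wrapped in RFE selects features for each affective dimension, and a multi-output SVR is then trained on the union of the selected features. The feature matrix is random, and running RFE per output dimension is a simplification of the paper's joint multi-output RFE.

```python
# Per-output RFE with linear SVR, then a multi-output SVR on the selected feature union.
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 40))                            # 120 trials, 40 candidate features
Y = np.column_stack([X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, 120),    # toy arousal
                     X[:, 1] + 0.8 * X[:, 5] + rng.normal(0, 0.1, 120)])   # toy valence

selected = set()
for d in range(Y.shape[1]):                               # RFE per output dimension
    rfe = RFE(SVR(kernel="linear"), n_features_to_select=5).fit(X, Y[:, d])
    selected |= set(np.where(rfe.support_)[0])

cols = sorted(selected)
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X[:, cols], Y)
print("selected features:", cols)
print("mean training R^2 over both outputs:", round(model.score(X[:, cols], Y), 3))
```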
Using Grid Cells for Navigation
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-01-01
Summary Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
NASA Astrophysics Data System (ADS)
Wang, Haijiang; Yang, Ling
2014-12-01
In this paper, the application of a vector analysis tool to the study of the illuminated area and the Doppler frequency distribution of an airborne pulse radar is investigated. An important feature of vector analysis is that it closely combines geometric ideas with algebraic calculations. Through a coordinate transform, the relationship between the radar antenna frame and the ground frame, under the aircraft's motion attitude, is derived. Using time-space analysis, the overlap area between the radar beam footprint and the pulse-illuminated zone is obtained. Furthermore, the Doppler frequency expression is deduced and the Doppler frequency distribution is plotted. Using the time-space analysis results, some important parameters of a specified airborne radar system are obtained, and the results are applied to correct the phase error introduced by attitude changes in airborne synthetic aperture radar (SAR) imaging.
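A minimal sketch of the coordinate-transform step described above follows: the antenna look direction, given in the aircraft body frame, is rotated into the ground frame using the yaw/pitch/roll attitude, and the Doppler shift in that direction follows from f_d = 2 (v · u_hat) / lambda. The attitude angles, platform speed, look geometry, wavelength and frame conventions are all illustrative assumptions.

```python
# Rotate the antenna look direction into the ground frame and evaluate the Doppler shift.
import numpy as np

def rot(yaw, pitch, roll):
    """Body-to-ground rotation matrix (Z-Y-X convention, angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

yaw, pitch, roll = np.radians([5.0, 2.0, -3.0])            # aircraft attitude (assumed)
v_ground = np.array([150.0, 0.0, 0.0])                     # platform velocity (m/s), along north
look_body = np.array([0.0, np.sin(np.radians(30)),         # 30 deg off nadir, to the right
                      -np.cos(np.radians(30))])

u_hat = rot(yaw, pitch, roll) @ look_body                  # look direction in the ground frame
wavelength = 0.03                                          # 3 cm wavelength (assumed)
f_d = 2.0 * np.dot(v_ground, u_hat) / wavelength
print("Doppler frequency: %.1f Hz" % f_d)
```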
NASA Astrophysics Data System (ADS)
Wu, Qi
2010-03-01
Demand forecasts play a crucial role in supply chain management, as the future demand for a product is the basis for the corresponding replenishment system. For demand series with small samples, seasonality, nonlinearity, randomness and fuzziness, existing support vector kernels cannot closely approximate the random curve of the sales time series in L2 space (the space of square-integrable functions). In this paper, we present a hybrid intelligent system combining a wavelet kernel support vector machine and particle swarm optimization for demand forecasting. Application to car sales series forecasting shows that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible. A comparison between the proposed method and others is also given, showing that, for the discussed example, this method outperforms the hybrid PSOv-SVM and other traditional methods.
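A small sketch of a wavelet kernel for support vector regression is given below, using the translation-invariant Morlet wavelet kernel K(x, y) = prod_i cos(1.75 (x_i - y_i)/a) exp(-(x_i - y_i)^2 / (2 a^2)) as a callable kernel in scikit-learn. The dilation a, the window length and the toy seasonal series are assumptions, and the particle swarm optimization stage of the hybrid model is not reproduced.

```python
# Morlet wavelet kernel SVR for one-step-ahead forecasting of a toy seasonal demand series.
import numpy as np
from sklearn.svm import SVR

def morlet_wavelet_kernel(A, B, a=2.0):
    """Gram matrix of the translation-invariant Morlet wavelet kernel."""
    diff = A[:, None, :] - B[None, :, :]                   # pairwise differences, (nA, nB, d)
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a * a)), axis=2)

rng = np.random.default_rng(8)
t = np.arange(60, dtype=float)
demand = 100 + 2 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 60)

win = 12                                                   # 12-month input window (assumed)
X = np.array([demand[i:i + win] for i in range(len(demand) - win)])
y = demand[win:]
Xs = (X - X.mean(axis=0)) / X.std(axis=0)                  # scale features for the kernel

svr = SVR(kernel=lambda A, B: morlet_wavelet_kernel(A, B, a=2.0), C=100.0)
svr.fit(Xs[:-1], y[:-1])                                   # hold out the last sample
print("one-step forecast:", round(float(svr.predict(Xs[-1:])[0]), 1),
      " actual:", round(float(y[-1]), 1))
```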
Cosmology and accelerator tests of strongly interacting dark matter
Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...
2018-03-23
A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.
Liu, Gui-Geng; Wang, Ke; Lee, Yun-Han; Wang, Dan; Li, Ping-Ping; Gou, Fangwang; Li, Yongnan; Tu, Chenghou; Wu, Shin-Tson; Wang, Hui-Tian
2018-02-15
Vortex vector optical fields (VVOFs) refer to a kind of vector optical field with an azimuth-variant polarization and a helical phase, simultaneously. Such a VVOF is defined by the topological index of the polarization singularity and the topological charge of the phase vortex. We present a simple method to measure the topological charge and index of VVOFs by using a space-variant half-wave plate (SV-HWP). The geometric phase grating of the SV-HWP diffracts a VVOF into ±1 orders with orthogonally left- and right-handed circular polarizations. By inserting a polarizer behind the SV-HWP, the two circular polarization states project into the linear polarization and then interfere with each other to form the interference pattern, which enables the direct measurement of the topological charge and index of VVOFs.
Precomputed state dependent digital control of a nuclear rocket engine
NASA Technical Reports Server (NTRS)
Johnson, M. R.
1972-01-01
A control method applicable to multiple-input multiple-output nonlinear time-invariant systems in which desired behavior can be expressed explicitly as a trajectory in system state space is developed. The precomputed state dependent control (PSDC) method is basically a synthesis technique in which a suboptimal control law is developed off-line, prior to system operation. This law is obtained by conducting searches at a finite number of points in state space, in the vicinity of some desired trajectory, to obtain a set of constant control vectors which tend to return the system to the desired trajectory. These vectors are used to evaluate the unknown coefficients in a control law having an assumed hyperellipsoidal form. The resulting coefficients constitute the heart of the controller and are used in the on-line computation of control vectors. Two examples of PSDC are given prior to the more detailed description of the NERVA control system development.
Hong-Ping, Xie; Jian-Hui, Jiang; Guo-Li, Shen; Ru-Qin, Yu
2002-01-01
A new approach for estimating the chemical rank of a three-way array, called the principal norm vector orthogonal projection method, is proposed. The method is based on the fact that the chemical rank of the three-way data array is equal to the rank of the column space of the matrix unfolded along the spectral or chromatographic mode. The vector with maximum Frobenius norm among all the column vectors of the unfolded matrix is selected as the principal norm vector (PNV). The column vectors are then transformed with an orthogonal projection matrix formulated from the PNV, so that the mathematical rank of the column space of the resulting residual matrix decreases by one. This orthogonal projection is carried out repeatedly until the contribution of the chemical species to the signal has been entirely removed. At that point the decrease in mathematical rank equals the chemical rank, and the remaining residual subspace is due entirely to noise. The chemical rank can then be estimated easily using an F-test. The method has been applied successfully to a simulated HPLC-DAD type three-way data array and to two real excitation-emission fluorescence data sets of amino acid mixtures and dye mixtures. A simulation with relatively high added noise shows that the method is robust against heteroscedastic noise. The proposed algorithm is simple, easy to program and computationally light.
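A short sketch of the projection loop described above follows: a simulated two-component three-way array is unfolded along the spectral mode, the maximum-norm column is projected out at each step, and the residual norm is tracked. A fixed number of iterations replaces the F-test stopping rule, and the data are synthetic assumptions.

```python
# Principal norm vector orthogonal projection on a simulated rank-2 three-way array.
import numpy as np

rng = np.random.default_rng(9)
n_sample, n_time, n_spec, n_comp = 5, 40, 30, 2
C = np.abs(rng.normal(size=(n_sample, n_time, n_comp)))    # concentration/elution profiles
S = np.abs(rng.normal(size=(n_spec, n_comp)))              # component spectra
cube = np.einsum('stk,wk->stw', C, S) + rng.normal(0, 1e-3, (n_sample, n_time, n_spec))

X = cube.reshape(-1, n_spec).T            # unfold: columns are spectra at each (sample, time) point
for step in range(4):
    norms = np.linalg.norm(X, axis=0)
    v = X[:, norms.argmax()]              # principal norm vector
    P = np.eye(n_spec) - np.outer(v, v) / (v @ v)    # projector onto the complement of v
    X = P @ X
    print(f"after projection {step + 1}: residual Frobenius norm = {np.linalg.norm(X):.4f}")
```

The residual norm drops to the noise level after the second projection, matching the chemical rank of two used in the simulation.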
Vector and Raster Data Storage Based on Morton Code
NASA Astrophysics Data System (ADS)
Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.
2018-05-01
Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats is still a tricky problem in geographic information system environments, and there is still no fully satisfactory way to solve it. This article proposes a method for integrating vector and raster data. We saved the image data and building vector data of Guilin University of Technology to an Oracle database, used the ADO interface to connect the database to Visual C++, and converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes in the Visual C++ environment. The method stores vector and raster data in the Oracle database and uses the Morton code, instead of row/column and X, Y, to mark the position information of both data types. Using Morton codes to mark geographic information makes full use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes their joint visualization more intuitive. The method is very helpful in situations that require analysing or displaying vector and raster data at the same time.
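A small sketch of the Morton (Z-order) encoding used above follows: the row/column numbers of a raster cell, or the quantized X/Y coordinates of a vector feature, are interleaved bit by bit into one integer key that can be stored in a single indexed database column. The 16-bit width is an assumption for illustration.

```python
# Morton (Z-order) encoding and decoding of 2D indices by bit interleaving.
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y (x in even positions, y in odd positions)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code: int, bits: int = 16):
    """Recover (x, y) from a Morton code."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

# raster cell at column 210, row 37, or a vector vertex quantized to the same grid
key = morton_encode(210, 37)
print(key, morton_decode(key))
```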
Optimizing interplanetary trajectories with deep space maneuvers. M.S. Thesis
NASA Technical Reports Server (NTRS)
Navagh, John
1993-01-01
Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.
INTERIM ANALYSIS OF THE CONTRIBUTION OF HIGH-LEVEL EVIDENCE FOR DENGUE VECTOR CONTROL.
Horstick, Olaf; Ranzinger, Silvia Runge
2015-01-01
This interim analysis reviews the available systematic literature for dengue vector control on three levels: 1) single and combined vector control methods, with existing work on peridomestic space spraying and on Bacillus thuringiensis israelensis, and with further work soon to be available on the use of temephos, copepods, and larvivorous fish; 2) vector control for a specific purpose, such as outbreak control; and 3) the strategic level, for example decentralization versus centralization, with a systematic review on vector control organization. Clear best-practice guidelines for the methodology of entomological studies are needed, and dengue transmission data need to be measured as well. The following recommendations emerge: although vector control can be effective, implementation remains an issue; single interventions are probably not useful; combinations of interventions have mixed results; careful implementation of vector control measures may be most important; and outbreak interventions are often applied with questionable effectiveness.
Axial vector Z‧ and anomaly cancellation
NASA Astrophysics Data System (ADS)
Ismail, Ahmed; Keung, Wai-Yee; Tsao, Kuo-Hsing; Unwin, James
2017-05-01
Whilst the prospect of new Z‧ gauge bosons with only axial couplings to the Standard Model (SM) fermions is widely discussed, examples of anomaly-free renormalisable models are lacking in the literature. We look to remedy this by constructing several motivated examples. Specifically, we consider axial vectors which couple universally to all SM fermions, as well as those which are generation-specific, leptophilic, and leptophobic. Anomaly cancellation typically requires the presence of new coloured and charged chiral fermions, and we argue that in a large class of models masses of these new states are expected to be comparable to that of the axial vector. Finally, an axial vector mediator could provide a portal between SM and hidden sector states, and we also consider the possibility that the axial vector couples to dark matter. If the dark matter relic density is set due to freeze-out via the axial vector, this strongly constrains the parameter space.
Hamiltonian indices and rational spectral densities
NASA Technical Reports Server (NTRS)
Byrnes, C. I.; Duncan, T. E.
1980-01-01
Several (global) topological properties of various spaces of linear systems, particularly symmetric, lossless, and Hamiltonian systems, and multivariable spectral densities of fixed McMillan degree are announced. The study is motivated by a result asserting that on a connected but not simply connected manifold, it is not possible to find a vector field having a sink as its only critical point. In the scalar case, this is illustrated by showing that only on the space of McMillan degree = |Cauchy index| = n scalar transfer functions can one define a globally convergent vector field. This result holds both in discrete-time and for the nonautonomous case. With these motivations in mind, theorems of Bochner and Fogarty are used in showing that spaces of transfer functions defined by symmetry conditions are, in fact, smooth algebraic manifolds.
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered, and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
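The patent text does not give the coplanarity test explicitly; a common check for whether two valley-segment vectors and the displacement between their anchor points share a plane is the scalar triple product. A sketch under that assumption (function name and tolerance are illustrative):

```python
import numpy as np

def nearly_coplanar(p1, d1, p2, d2, tol=1e-3):
    """Test whether segment directions d1, d2 (anchored at p1, p2) define a common plane.

    The scalar triple product (p2 - p1) . (d1 x d2) vanishes when the two lines are
    coplanar; a small normalized value suggests a shared structural (fracture) plane.
    """
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    n = np.cross(d1, d2)
    norm = np.linalg.norm(n) * np.linalg.norm(p2 - p1)
    if norm == 0.0:                  # parallel directions or coincident anchor points
        return True
    return abs(np.dot(p2 - p1, n)) / norm < tol

# When the test passes, the common plane orientation is the unit vector n / |n|.
```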
Zhang, Jiongmin; Jia, Ke; Jia, Jinmeng; Qian, Ying
2018-04-27
Comparing and classifying the functions of gene products are important tasks in today's biomedical research. The semantic similarity derived from the Gene Ontology (GO) annotation has been regarded as one of the most widely used indicators for protein interaction. Among the various approaches proposed, those based on the vector space model are relatively simple, but their effectiveness is far from satisfactory. We propose a Hierarchical Vector Space Model (HVSM) for computing semantic similarity between different genes or their products, which enhances the basic vector space model by introducing the relations between GO terms. Besides the directly annotated terms, HVSM also takes their ancestors and descendants related by "is_a" and "part_of" relations into account. Moreover, HVSM introduces the concept of a Certainty Factor to calibrate the semantic similarity based on the number of terms annotated to genes. To assess the performance of our method, we applied HVSM to Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets. Compared with TCSS, Resnik, and other classic similarity measures, HVSM achieved significant improvement in distinguishing positive from negative protein interactions. We also tested its correlation with sequence, EC, and Pfam similarity using the online tool CESSM. HVSM showed an improvement of up to 4% compared to TCSS, 8% compared to IntelliGO, 12% compared to basic VSM, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC using AUC scores. The CESSM test showed that HVSM was comparable to SimGIC and superior to TCSS and all other similarity measures in CESSM. Supplementary information and the software are available at https://github.com/kejia1215/HVSM .
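HVSM's exact term weighting and Certainty Factor are only summarized above; the basic vector-space step it builds on can be sketched as a cosine similarity between annotation vectors expanded with ancestor terms. The weights and helper names below are hypothetical, and the is_a/part_of closure is assumed precomputed:

```python
import numpy as np

def annotation_vector(direct_terms, ancestor_terms, vocabulary, ancestor_weight=0.5):
    """Term vector for one gene: direct GO terms get weight 1.0,
    inherited ancestors a smaller (hypothetical) weight."""
    index = {t: i for i, t in enumerate(vocabulary)}
    v = np.zeros(len(vocabulary))
    for t in direct_terms:
        v[index[t]] = 1.0
    for t in ancestor_terms:
        v[index[t]] = max(v[index[t]], ancestor_weight)
    return v

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```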
Fading testbed for free-space optical communications
NASA Astrophysics Data System (ADS)
Shrestha, Amita; Giggenbach, Dirk; Mustafa, Ahmad; Pacheco-Labrador, Jorge; Ramirez, Julio; Rein, Fabian
2016-10-01
Free-space optical (FSO) communication is a very attractive technology offering very high throughput without spectral regulation constraints, yet allowing small antennas (telescopes) and tap-proof communication. However, the transmitted signal has to travel through the atmosphere, where it is influenced by atmospheric turbulence, causing scintillation of the received signal. In addition, climatic effects like fog, clouds, and rain also affect the signal significantly. Moreover, FSO is a line-of-sight technology and requires precise pointing and tracking of the telescopes, the lack of which also causes fading. To achieve error-free transmission, various mitigation techniques like aperture averaging, adaptive optics, transmitter diversity, and sophisticated coding and modulation schemes are being investigated and implemented. Evaluating the performance of such systems under controlled conditions is very difficult in field trials, since the atmospheric situation constantly changes and the target scenario (e.g. on aircraft or satellites) is not easily accessible for test purposes. Therefore, with the motivation to test and verify systems under laboratory conditions, DLR has developed a fading testbed that can emulate most realistic channel conditions. The main principle of the fading testbed is to control the input current of a variable optical attenuator such that it attenuates the incoming signal according to the loaded power vector. The sampling frequency and mean power of the vector can optionally be changed according to requirements. This paper provides a brief introduction to the software and hardware development of the fading testbed and measurement results showing its accuracy and application scenarios.
NASA Technical Reports Server (NTRS)
Papanyan, Valeri; Oshle, Edward; Adamo, Daniel
2008-01-01
Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" Space Suit jettisons, as well as the near-real-time (provided several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
Mean template for tensor-based morphometry using deformation tensors.
Leporé, Natasha; Brun, Caroline; Pennec, Xavier; Chou, Yi-Yu; Lopez, Oscar L; Aizenstein, Howard J; Becker, James T; Toga, Arthur W; Thompson, Paul M
2007-01-01
Tensor-based morphometry (TBM) studies anatomical differences between brain images statistically, to identify regions that differ between groups, over time, or correlate with cognitive or clinical measures. Using a nonlinear registration algorithm, all images are mapped to a common space, and statistics are most commonly performed on the Jacobian determinant (local expansion factor) of the deformation fields. It has previously been shown that the detection sensitivity of the standard TBM approach can be increased by using the full deformation tensors in a multivariate statistical analysis. Here we set out to improve the common space itself, by choosing the shape that minimizes a natural metric on the deformation tensors from that space to the population of control subjects. This method avoids statistical bias and should ease nonlinear registration of new subjects' data to a template that is 'closest' to all subjects' anatomies. As deformation tensors are symmetric positive-definite matrices and do not form a vector space, all computations are performed in the log-Euclidean framework. The control brain B that is already closest to 'average' is found. A gradient descent algorithm is then used to perform the minimization that iteratively deforms this template and obtains the mean shape. We apply our method to map the profile of anatomical differences in a dataset of 26 HIV/AIDS patients and 14 controls, via a log-Euclidean Hotelling's T2 test on the deformation tensors. These results are compared to the ones found using the 'best' control, B. Statistics on both shapes are evaluated using cumulative distribution functions of the p-values in maps of inter-group differences.
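The log-Euclidean computations mentioned above reduce to matrix logarithms and exponentials of the symmetric positive-definite deformation tensors. A minimal sketch of the log-Euclidean mean of a set of tensors (SciPy assumed; the example tensors are arbitrary):

```python
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_mean(tensors):
    """Mean of symmetric positive-definite matrices in the log-Euclidean framework:
    average the matrix logarithms, then exponentiate back."""
    logs = [logm(t) for t in tensors]
    return expm(np.mean(logs, axis=0))

# Example with two SPD deformation tensors
a = np.array([[2.0, 0.3], [0.3, 1.0]])
b = np.array([[1.5, -0.1], [-0.1, 0.8]])
print(log_euclidean_mean([a, b]))
```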
Multiclass Reduced-Set Support Vector Machines
NASA Technical Reports Server (NTRS)
Tang, Benyang; Mazzoni, Dominic
2006-01-01
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our method builds on Burges' approach, which constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend it by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
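As a hedged illustration of the pre-image search (not the authors' code): for an RBF kernel, the feature-space distance from Φ(z) to a fixed expansion Ψ = Σ_i α_i Φ(x_i) depends on z only through −2 Σ_i α_i k(z, x_i) plus z-independent terms, so differential evolution can minimize that quantity directly:

```python
import numpy as np
from scipy.optimize import differential_evolution

def find_preimage(X, alpha, gamma, bounds):
    """Find z minimizing || Phi(z) - sum_i alpha_i Phi(x_i) ||^2 for an RBF kernel.

    X      : (n, d) expansion/support vectors
    alpha  : (n,) expansion coefficients
    gamma  : RBF parameter, k(x, y) = exp(-gamma * ||x - y||^2)
    bounds : list of (low, high) per input dimension
    """
    def objective(z):
        k = np.exp(-gamma * np.sum((X - z) ** 2, axis=1))
        return -2.0 * float(alpha @ k)      # only the z-dependent part of the distance
    result = differential_evolution(objective, bounds, seed=0)
    return result.x
```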
Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel
2014-06-03
Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.
Optical/Infrared Signatures for Space-Based Remote Sensing
2007-11-01
Vanderbilt et al., 1985a, 1985b]. … first linear polarization was introduced, followed by progress toward a full vector theory of polarization … radiance profiles taken 30 s apart in a view direction orthogonal to the velocity vector, showing considerable structure due to radiance layers in the … Figure 3. The northern polar region and locations of the MSX …
Assessing Construct Validity Using Multidimensional Item Response Theory.
ERIC Educational Resources Information Center
Ackerman, Terry A.
The concept of a user-specified validity sector is discussed. The idea of the validity sector combines the work of M. D. Reckase (1986) and R. Shealy and W. Stout (1991). Reckase developed a methodology to represent an item in a multidimensional latent space as a vector. Item vectors are computed using multidimensional item response theory item…
On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.
ERIC Educational Resources Information Center
Carter, Randy L.; And Others
1989-01-01
The partitioning of squared Euclidean (E²) distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
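The partitioning can be illustrated numerically: project the difference vector onto mutually orthogonal subspaces and the squared lengths of the projections sum to the total squared distance. A small sketch with orthonormal bases of two complementary subspaces (the vectors are arbitrary examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 5.0, 2.0])
d = x - y

# Two mutually orthogonal subspaces of R^4, given by orthonormal basis columns
B1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])   # span(e1, e2)
B2 = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # span(e3, e4)

parts = [np.linalg.norm(B.T @ d) ** 2 for B in (B1, B2)]
assert np.isclose(sum(parts), np.linalg.norm(d) ** 2)   # E^2 distance = sum of parts
```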
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2014-03-01
A critical analysis of the foundations of standard vector calculus is proposed. The methodological basis of the analysis is the unity of formal logic and of rational dialectics. It is proved that vector calculus is an incorrect theory because: (a) it is not based on a correct methodological basis, the unity of formal logic and of rational dialectics; (b) it does not contain correct definitions of "movement," "direction," and "vector"; (c) it does not take into consideration the dimensions of physical quantities (i.e., number names, denominate numbers, concrete numbers) characterizing the concept of "physical vector," and therefore it has no natural-scientific meaning; and (d) operations on "physical vectors" and the vector calculus propositions relating to "physical vectors" are contrary to formal logic.
Trophic diversity in the evolution and community assembly of loricariid catfishes
2012-01-01
Background The Neotropical catfish family Loricariidae contains over 830 species that display extraordinary variation in jaw morphologies but nonetheless reveal little interspecific variation from a generalized diet of detritus and algae. To investigate this paradox, we collected δ13C and δ15N stable isotope signatures from 649 specimens representing 32 loricariid genera and 82 species from 19 local assemblages distributed across South America. We calculated vectors representing the distance and direction of each specimen relative to the δ15N/δ13C centroid for its local assemblage, and then examined the evolutionary diversification of loricariids across assemblage isotope niche space by regressing the mean vector for each genus in each assemblage onto a phylogeny reconstructed from osteological characters. Results Loricariids displayed a total range of δ15N assemblage centroid deviation spanning 4.9‰, which is within the tissue–diet discrimination range known for Loricariidae, indicating that they feed at a similar trophic level and that δ15N largely reflects differences in their dietary protein content. Total range of δ13C deviation spanned 7.4‰, which is less than the minimum range reported for neotropical river fish communities, suggesting that loricariids selectively assimilate a restricted subset of the full basal resource spectrum available to fishes. Phylogenetic regression of assemblage centroid-standardized vectors for δ15N and δ13C revealed that loricariid genera with allopatric distributions in disjunct river basins partition basal resources in an evolutionarily conserved manner concordant with patterns of jaw morphological specialization and with evolutionary diversification via ecological radiation. Conclusions Trophic partitioning along elemental/nutritional gradients may provide an important mechanism of dietary segregation and evolutionary diversification among loricariids and perhaps other taxonomic groups of apparently generalist detritivores and herbivores. Evolutionary patterns among the Loricariidae show a high degree of trophic niche conservatism, indicating that evolutionary lineage affiliation can be a strong predictor of how basal consumers segregate trophic niche space. PMID:22835218
Estimating normal mixture parameters from the distribution of a reduced feature vector
NASA Technical Reports Server (NTRS)
Guseman, L. F.; Peters, B. C., Jr.; Swasdee, M.
1976-01-01
A FORTRAN computer program was written and tested. The measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions. In the first experiment, the vectors are computed from the input means and covariances. In the second experiment, the vectors are 16 channel measurements. The starting covariances were constructed as if there were no correlation between separate passes. The biases obtained from each run are listed.
Noid, W. G.; Liu, Pu; Wang, Yanting; Chu, Jhih-Wei; Ayton, Gary S.; Izvekov, Sergei; Andersen, Hans C.; Voth, Gregory A.
2008-01-01
The multiscale coarse-graining (MS-CG) method [S. Izvekov and G. A. Voth, J. Phys. Chem. B 109, 2469 (2005); J. Chem. Phys. 123, 134105 (2005)] employs a variational principle to determine an interaction potential for a CG model from simulations of an atomically detailed model of the same system. The companion paper proved that, if no restrictions regarding the form of the CG interaction potential are introduced and if the equilibrium distribution of the atomistic model has been adequately sampled, then the MS-CG variational principle determines the exact many-body potential of mean force (PMF) governing the equilibrium distribution of CG sites generated by the atomistic model. In practice, though, CG force fields are not completely flexible, but only include particular types of interactions between CG sites, e.g., nonbonded forces between pairs of sites. If the CG force field depends linearly on the force field parameters, then the vector-valued functions that relate the CG forces to these parameters determine a set of basis vectors that span a vector subspace of CG force fields. The companion paper introduced a distance metric for the vector space of CG force fields and proved that the MS-CG variational principle determines the CG force field that is within that vector subspace and that is closest to the force field determined by the many-body PMF. The present paper applies the MS-CG variational principle for parametrizing molecular CG force fields and derives a linear least squares problem for the parameter set determining the optimal approximation to this many-body PMF. Linear systems of equations for these CG force field parameters are derived and analyzed in terms of equilibrium structural correlation functions. Numerical calculations for a one-site CG model of methanol and a molecular CG model of the EMIM+/NO3− ionic liquid are provided to illustrate the method. PMID:18601325
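The linear least squares problem referred to above can be sketched as follows: when the CG force field is linear in its parameters, stacking the basis-function forces into a design matrix G and the mapped atomistic reference forces into a vector f gives parameters minimizing ||Gφ − f||². Array names here are hypothetical:

```python
import numpy as np

def fit_cg_parameters(G: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Solve min_phi || G @ phi - f ||^2 for the CG force field parameters.

    G : (3 * n_sites * n_frames, n_params) basis forces evaluated on CG configurations
    f : (3 * n_sites * n_frames,) reference forces mapped from the atomistic model
    """
    phi, *_ = np.linalg.lstsq(G, f, rcond=None)
    return phi
```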
Metrics for comparing neuronal tree shapes based on persistent homology.
Li, Yanjie; Wang, Dingkang; Ascoli, Giorgio A; Mitra, Partha; Wang, Yusu
2017-01-01
As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools to facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support comparison and classification of large collections of neuron structures efficiently. Specifically, we propose to use a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structures. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function will give a persistence-based signature containing strictly more information than the classical Sholl analysis. At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which is important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework.
The Alignment of the Mean Wind and Stress Vectors in the Unstable Surface Layer
NASA Astrophysics Data System (ADS)
Bernardes, M.; Dias, N. L.
2010-01-01
A significant non-alignment between the mean horizontal wind vector and the stress vector was observed for turbulence measurements both above the water surface of a large lake and over a land surface (soybean crop). Possible causes for this discrepancy, such as flow distortion, averaging times, and the procedure used for extracting the turbulent fluctuations (low-pass filtering, filter widths, etc.), were dismissed after a detailed analysis. Minimum averaging times, always less than 30 min, were established by calculating ogives, and error bounds for the turbulent stresses were derived with three different approaches, based on integral time scales (first-crossing and lag-window estimates) and on a bootstrap technique. It was found that the mean absolute value of the angle between the mean wind and stress vectors is highly related to atmospheric stability, with the non-alignment increasing distinctly with increasing instability. Given a coordinate rotation that aligns the mean wind with the x direction, this behaviour can be explained by the growth of the relative error of the u-w component with instability. As a result, under more unstable conditions the u-w and the v-w components become of the same order of magnitude, and the local stress vector gives the impression of being non-aligned with the mean wind vector. The relative error of the v-w component is large enough to make it indistinguishable from zero throughout the range of stabilities. Therefore, the standard assumptions of Monin-Obukhov similarity theory hold: it is fair to assume that the v-w stress component is actually zero, and that the non-alignment is a purely statistical effect. An analysis of the dimensionless budgets of the u-w and the v-w components confirms this interpretation, with both shear and buoyant production of u-w decreasing with increasing instability. In the v-w budget, shear production is zero by definition, while buoyancy displays very low-intensity fluctuations around zero. As local free convection is approached, the turbulence becomes effectively axisymmetric, and a practical limit seems to exist beyond which it is not possible to measure the u-w component accurately.
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. The fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
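The local tangent space representations mentioned above are estimated by local principal component analysis; a hedged sketch of that step (k-nearest-neighbour PCA with illustrative parameter values):

```python
import numpy as np

def local_tangent_basis(X, i, k=10, d=2):
    """Estimate a d-dimensional tangent basis at sample X[i] from its k nearest neighbours."""
    dists = np.linalg.norm(X - X[i], axis=1)
    neighbours = X[np.argsort(dists)[1:k + 1]]          # exclude the point itself
    centred = neighbours - neighbours.mean(axis=0)
    # Principal directions = leading right singular vectors of the centred neighbourhood
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:d].T                                      # columns span the local tangent space
```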
NASA Astrophysics Data System (ADS)
Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Draayer, J. P.
2018-06-01
A simple and effective algebraic isospin projection procedure for constructing orthonormal basis vectors of irreducible representations of O(5) ⊃ O_T(3) ⊗ O_N(2) from those in the canonical O(5) ⊃ SU_Λ(2) ⊗ SU_I(2) basis is outlined. The expansion coefficients are components of null-space vectors of the projection matrix, which in general has four nonzero elements in each row. Explicit formulae for evaluating O_T(3)-reduced matrix elements of O(5) generators are derived.
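Numerically, such null-space vectors can be obtained from a singular value decomposition of the projection matrix; a small sketch with a stand-in matrix whose entries are placeholders, not the physical matrix elements (SciPy assumed):

```python
import numpy as np
from scipy.linalg import null_space

# Stand-in projection matrix with four nonzero entries per row (values are hypothetical)
P = np.array([[1.0, -0.5, 0.25, -0.125, 0.0],
              [0.0, 1.0, -0.5, 0.25, -0.125]])

basis = null_space(P)            # orthonormal columns spanning the kernel of P
print(basis.shape)               # (5, 3): three independent null-space vectors
```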
NASA Technical Reports Server (NTRS)
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
The effects of vector leptoquark on the ℬb(ℬ = Λ,Σ) →ℬμ+μ- decays
NASA Astrophysics Data System (ADS)
Wang, Shuai-Wei; Huang, Jin-Shu
2016-07-01
In this paper, we have studied the baryonic semileptonic ℬb(ℬ = Λ, Σ) →ℬμ+μ- decays in the vector leptoquark model with the U = (3, 3, 2/3) state. Using the parameter space constrained through some well-measured decay modes, such as Bs → μ+μ-, Bs-B̄s mixing, and B → K∗μ+μ- decays, we show the effects of the vector leptoquark state on the double lepton polarization asymmetries of ℬb(ℬ = Λ, Σ) →ℬμ+μ- decays, and find that the double lepton polarization asymmetries, except for PLL, PLN and PNL, are sensitive to the contributions of the vector leptoquark model.
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier is derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises if the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path-independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
Wenying, Wei; Jinyu, Han; Wen, Xu
2004-01-01
The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization ΔvapH(Tb) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the ΔvapH(Tb) estimates is 1.16, which shows that the present method offers a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point compared with conventional group methods.
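The paper's group vector space expression and fitted parameters are not reproduced in the abstract; the generic group-contribution step it refines can be sketched as a sum of group counts times group parameters. All numbers below are placeholders, not the paper's values:

```python
# Hypothetical illustration of a group-contribution estimate of Delta_vap H(T_b).
group_counts = {"CH3": 2, "CH2": 4, "OH": 1}          # groups present in the molecule
group_params = {"CH3": 2.1, "CH2": 1.8, "OH": 9.5}    # placeholder kJ/mol contributions

dvap_h_tb = sum(n * group_params[g] for g, n in group_counts.items())
print(f"estimated Delta_vap H(T_b) ~ {dvap_h_tb:.1f} kJ/mol")
```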
Electro-gravity via geometric chronon field
NASA Astrophysics Data System (ADS)
Suchard, Eytan H.
2017-05-01
In De Sitter / Anti-De Sitter space-time and in other geometries, reference sub-manifolds from which proper time is measured along integral curves are described as events. We introduce here a foliation with the help of a scalar field. The scalar field need not be unique, but from its gradient an intrinsic Reeb vector of the foliations perpendicular to the gradient vector is calculated. The Reeb vector describes the acceleration of a physical particle that moves along the integral curves formed by the gradient of the scalar field. The Reeb vector appears as a component of an anti-symmetric matrix that is part of a rank-2 2-form. The 2-form is extended into a non-degenerate 4-form and into a rank-4 matrix of a 2-form, which, when multiplied by the velocity of a particle, becomes the acceleration of the particle. The matrix has one U(1) degree of freedom and additional SU(2) degrees of freedom in two vectors that span the plane perpendicular to the gradient of the scalar field and to the Reeb vector. In total, there are U(1) × SU(2) degrees of freedom. SU(3) degrees of freedom arise from three-dimensional foliations but require an additional symmetry in order to have a valid covariant meaning. Matter in the Einstein-Grossmann equation is replaced by the action of the acceleration field, i.e. by a geometric action which is not anticipated by the metric alone. This idea leads to a new formalism that replaces the conventional stress-energy-momentum tensor. The formalism is mainly developed for classical physics but is also discussed for quantized physics based on events instead of particles. The result is that a positive charge manifests a small attracting gravity and a stronger but still small repelling acceleration field that repels even uncharged particles that have rest mass, whereas a negative charge manifests a repelling anti-gravity but also a stronger acceleration field that attracts even uncharged particles that have rest mass. Preliminary version: http://sciencedomain.org/abstract/9858
AlDahlawi, Ismail; Prasad, Dheerendra; Podgorsak, Matthew B
2017-05-01
The Gamma Knife Icon comes with an integrated cone-beam CT (CBCT) for image-guided stereotactic treatment deliveries. The CBCT can be used for defining the Leksell stereotactic space from imaging without the traditional invasive frame system, which also allows frameless thermoplastic-mask stereotactic treatments (single or fractionated) with the Gamma Knife unit. In this study, we used an in-house built marker tool to evaluate the stability of the CBCT-based stereotactic space and its agreement with the standard frame-based stereotactic space. We imaged the tool with a CT indicator box using our CT simulator at the beginning, middle, and end of the study period (6 weeks) to determine the frame-based stereotactic space. The tool was also scanned with the Icon's CBCT on a daily basis throughout the study period, and the CBCT images were used to determine the CBCT-based stereotactic space. The coordinates of each marker were determined in each CT and CBCT scan using the Leksell GammaPlan treatment planning software. The magnitudes of the vector difference between the means of each marker in frame-based and CBCT-based stereotactic space ranged from 0.21 to 0.33 mm, indicating good agreement between the CBCT-based and frame-based stereotactic space definitions. Scanning 4 months later showed good prolonged stability of the CBCT-based stereotactic space definition. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
A comparison of breeding and ensemble transform vectors for global ensemble generation
NASA Astrophysics Data System (ADS)
Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan
2012-02-01
To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the anomaly correlation coefficient (ACC), a deterministic attribute of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuring the best ensemble prediction system to be used in operation.
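The CRPS cited above can be computed directly from an ensemble and a verifying observation; a standard textbook sketch (not NMC/CMA's implementation):

```python
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """Continuous ranked probability score for one ensemble forecast.

    CRPS = E|X - obs| - 0.5 * E|X - X'|, with X, X' drawn from the ensemble.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return float(term1 - term2)

print(crps_ensemble([1.0, 1.5, 2.0, 2.5], obs=1.8))
```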
The application of vector concepts on two skew lines
NASA Astrophysics Data System (ADS)
Alghadari, F.; Turmudi; Herman, T.
2018-01-01
The purpose of this study is to show how vector concepts can be applied to two skew lines in three-dimensional (3D) coordinates and how this application can be used. Many mathematical concepts are functionally related to one another, but the relation between vectors and 3D geometry has rarely been exploited in classroom learning. Studies show that female students have more difficulty learning 3D geometry than male students, which has been attributed to differences in spatial intelligence. Making the relevance of vector concepts explicit can help balance the learning achievement and mathematical ability of male and female students. Distances on a cube, cuboid, or pyramid can be described through the rectangular coordinates of points in space, and any two coordinate points define a vector. Two skew lines have a shortest distance and an angle between them. To calculate the shortest distance, one first represents each line by a vector using the position-vector concept, then obtains a normal vector to the two direction vectors by the cross product, and finally forms a vector from a pair of points, one on each line; the shortest distance is the scalar orthogonal projection of that connecting vector onto the normal vector. The angle is obtained from the dot product of the two direction vectors followed by the inverse cosine. The application is useful in mathematics learning and in the orthographic projection method.
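The procedure described above translates into a short computation: take the cross product of the two direction vectors for the common normal, project the connecting vector onto it for the shortest distance, and use the dot product for the angle. A sketch:

```python
import numpy as np

def skew_line_distance_and_angle(p1, d1, p2, d2):
    """Shortest distance and angle between lines p1 + t*d1 and p2 + s*d2."""
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    n = np.cross(d1, d2)                                    # common normal
    distance = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)  # scalar projection onto n
    cos_angle = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return distance, np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))

# Example: two skew edges of a unit cube
print(skew_line_distance_and_angle([0, 0, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]))
# -> (1.0, 90.0)
```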
HAL/S programmer's guide. [space shuttle flight software language
NASA Technical Reports Server (NTRS)
Newbold, P. M.; Hotz, R. L.
1974-01-01
HAL/S is a programming language developed to satisfy the flight software requirements for the space shuttle program. The user's guide explains pertinent language operating procedures and describes the various HAL/S facilities for manipulating integer, scalar, vector, and matrix data types.
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) that uses grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm yields smaller steady-state estimation errors than the existing algorithms.
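For context, the conventional affine projection update that the grouping and selection procedures feed into can be sketched as below; the grouping and MSE-based selection from the paper are represented only by the pre-selected input matrix passed in (parameter values are illustrative):

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine projection step.

    w : (L,) current filter coefficients
    X : (L, K) matrix whose columns are the K selected input vectors
    d : (K,) desired responses for those inputs
    """
    e = d - X.T @ w                                     # a-priori errors
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w
```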
A new class of N=2 topological amplitudes
NASA Astrophysics Data System (ADS)
Antoniadis, I.; Hohenegger, S.; Narain, K. S.; Sokatchev, E.
2009-12-01
We describe a new class of N=2 topological amplitudes that compute a particular class of BPS terms in the low energy effective supergravity action. Specifically, they compute a coupling F(…) built from the gauge field strengths F, the gaugino λ and the holomorphic vector multiplet scalars ϕ. The novel feature of these terms is that they depend both on the vector and hypermultiplet moduli. The BPS nature of these terms implies that they satisfy a holomorphicity condition with respect to the vector moduli and a harmonicity condition as well as a second order differential equation with respect to the hypermultiplet moduli. We study these conditions explicitly in heterotic string theory and show that they are indeed satisfied up to anomalous boundary terms in the world-sheet moduli space. We also analyze the boundary terms in the holomorphicity and harmonicity equations at a generic point in the vector and hyper moduli space. In particular, we show that the obstruction to holomorphicity arises from the one-loop threshold correction to the gauge couplings, and we argue that this is due to the contribution of non-holomorphic couplings to the connected graphs via elimination of the auxiliary fields.
The Local Stellar Velocity Field via Vector Spherical Harmonics
NASA Technical Reports Server (NTRS)
Makarov, V. V.; Murphy, D. W.
2007-01-01
We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and the low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V_X, V_Y, V_Z) = (10.5, 18.5, 7.3) ± 0.1 km/s, not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V_X, V_Y, V_Z) = (9.9, 15.6, 6.9) ± 0.2 km/s. The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher-degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star...
Piezoelectrically forced vibrations of electroded doubly rotated quartz plates by state space method
NASA Technical Reports Server (NTRS)
Chander, R.
1990-01-01
The purpose of this investigation is to develop an analytical method to study the vibration characteristics of piezoelectrically forced quartz plates. The procedure can be summarized as follows. The three-dimensional governing equations of piezoelectricity, the constitutive equations, and the strain-displacement relationships are used in deriving the final equations. For this purpose, a state vector consisting of stresses and displacements is chosen, and the above equations are manipulated to obtain the projection of the derivative of the state vector with respect to the thickness coordinate onto the state vector itself. The solution for the state vector at any plane is then easily obtained in closed form in terms of the state vector quantities at a reference plane. To simplify the analysis, simple thickness mode and plane strain approximations are used.
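When the coefficients are constant through the thickness, the closed-form propagation alluded to above is a matrix exponential of the state matrix; a hedged sketch with a placeholder state matrix A (the piezoelectric coefficients themselves are not reproduced):

```python
import numpy as np
from scipy.linalg import expm

def propagate_state(A: np.ndarray, x_ref: np.ndarray, z: float) -> np.ndarray:
    """Closed-form solution of dx/dz = A x: the state (stresses and displacements)
    at thickness coordinate z, given the state x_ref at the reference plane z = 0."""
    return expm(A * z) @ x_ref
```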
The Cauchy-Schwarz Inequality and the Induced Metrics on Real Vector Spaces Mainly on the Real Line
ERIC Educational Resources Information Center
Ramasinghe, W.
2005-01-01
It is very well known that the Cauchy-Schwarz inequality is an important property shared by all inner product spaces and the inner product induces a norm on the space. A proof of the Cauchy-Schwarz inequality for real inner product spaces exists, which does not employ the homogeneous property of the inner product. However, it is shown that a real…
Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases
Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.
2007-01-01
The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
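The LogOdds map and its vector space operations can be sketched directly for the binary case: probabilities go to the real line through the logit, are added or scaled there, and return through the logistic function (multi-label atlases generalize this per structure):

```python
import numpy as np

def logodds(p):
    """Map a probability map into the LogOdds (logit) vector space."""
    p = np.clip(p, 1e-8, 1 - 1e-8)
    return np.log(p / (1 - p))

def inv_logodds(t):
    """Map a LogOdds field back to probabilities (logistic function)."""
    return 1.0 / (1.0 + np.exp(-t))

# Addition in LogOdds space corresponds to a normalized product of odds:
pa, pb = np.array([0.2, 0.7]), np.array([0.6, 0.9])
combined = inv_logodds(logodds(pa) + logodds(pb))
print(combined)          # element-wise combination of the two probability maps
```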
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakin, Alexander B.; Popov, Vladimir A., E-mail: alexander.balakin@kpfu.ru, E-mail: vladipopov@mail.ru
In the framework of the Einstein-aether theory we consider a cosmological model, which describes the evolution of the unit dynamic vector field with activated rotational degree of freedom. We discuss exact solutions of the Einstein-aether theory, for which the space-time is of the Gödel-type, the velocity four-vector of the aether motion is characterized by a non-vanishing vorticity, thus the rotational vectorial modes can be associated with the source of the universe rotation. The main goal of our paper is to study the motion of test relativistic particles with a vectorial internal degree of freedom (spin or polarization), which is coupled to the unit dynamic vector field. The particles are considered as the test ones in the given space-time background of the Gödel-type; the spin (polarization) coupling to the unit dynamic vector field is modeled using exact solutions of three types. The first exact solution describes the aether with arbitrary Jacobson's coupling constants; the second one relates to the case, when the Jacobson's constant responsible for the vorticity is vanishing; the third exact solution is obtained using three constraints for the coupling constants. The analysis of the exact expressions, which are obtained for the particle momentum and for the spin (polarization) four-vector components, shows that the interaction of the spin (polarization) with the unit vector field induces a rotation, which is additional to the geodesic precession of the spin (polarization) associated with the universe rotation as a whole.
Bioelectrical impedance vector distribution in the first year of life.
Savino, Francesco; Grasso, Giulia; Cresi, Francesco; Oggero, Roberto; Silvestro, Leandra
2003-06-01
We assessed the bioelectrical impedance vector distribution in a sample of healthy infants in the first year of life, which is not available in the literature. The study was conducted as a cross-sectional study in 153 healthy Caucasian infants (90 male and 63 female) younger than 1 y, born at full term, adequate for gestational age, free from chronic diseases or growth problems, and not feverish. Z scores for weight, length, cranial circumference, and body mass index for the study population were within the range of +/-1.5 standard deviations according to the Euro-Growth Study references. Concurrent anthropometric (weight, length, and cranial circumference), body mass index, and bioelectrical impedance (resistance and reactance) measurements were made by the same operator. Whole-body (hand to foot) tetrapolar measurements were performed with a single-frequency (50 kHz), phase-sensitive impedance analyzer. The study population was subdivided into three age classes for statistical analysis: 0 to 3.99 mo, 4 to 7.99 mo, and 8 to 11.99 mo. Using the bivariate normal distribution of the resistance and reactance components standardized by the infant's length, the bivariate 95% confidence limits for the mean impedance vector, separated by sex and age group, were calculated and plotted. Further, the bivariate 95%, 75%, and 50% tolerance intervals for individual vector measurements in the first year of life were plotted. Resistance and reactance values often fluctuated during the first year of life, particularly as raw measurements (without normalization by the subject's length). However, the 95% confidence ellipses of mean vectors from the three age groups overlapped each other, as did the confidence ellipses by sex for each age class, indicating no significant vector migration during the first year of life. We obtained an estimate of the mean impedance vector in a sample of healthy infants in the first year of life and calculated the bivariate values for an individual vector (95%, 75%, and 50% tolerance ellipses).
Palaniyandi, M
2012-12-01
Several attempts have been made to apply remote sensing and GIS to the study of vectors, biodiversity, vector presence, vector abundance, and vector-borne diseases with respect to space and time. This study reviews and appraises the potential use of remote sensing and GIS applications for the spatial prediction of vector-borne disease transmission. The presence and abundance of vectors and vector-borne diseases, disease infection, and disease transmission are not ubiquitous; they are localized and confined by geographical, environmental, and climatic factors. The presence of vectors and vector-borne diseases is complex in nature, yet it is shaped and fueled by geographical, climatic, and environmental factors, including man-made ones. Information now routinely derived from satellite data, including vegetation indices of canopy cover and density, soil type, soil moisture, soil texture, and soil depth, can be integrated in an expert GIS engine for the spatial analysis of other geoclimatic and geoenvironmental variables. The present study gives detailed information on the classical studies of the past and present and on the future role of remote sensing and GIS for vector-borne disease control. Ecological modeling directly provides the information needed to understand the spatial variation of vector biodiversity, vector presence, vector abundance, and vector-borne diseases in association with geoclimatic and environmental variables. A probability map of the geographical distribution and of seasonal variations in the horizontal and vertical distribution of vector abundance and its association with vector-borne diseases can be obtained with a low-cost remote sensing and GIS tool, given reliable data and timely processing.
NASA Astrophysics Data System (ADS)
Dixon, W. G.
1982-11-01
Preface; 1. The physics of space and time; 2. Affine spaces in mathematics and physics; 3. Foundations of dynamics; 4. Relativistic simple fluids; 5. Electrodynamics of polarisable fluids; Appendix: Vector and dyadic notation in three dimensions; Publications referred to in the text; Summary and index of symbols and conventions; Subject index.
Plant Seeds as Model Vectors for the Transfer of Life Through Space
NASA Astrophysics Data System (ADS)
Tepfer, David; Leach, Sydney
2006-12-01
We consider plant seeds as terrestrial models for a vectored life form that could protect biological information in space. Seeds consist of maternal tissue surrounding and protecting an embryo. Some seeds resist deleterious conditions found in space: ultra low vacuum, extreme temperatures, and radiation, including intense UV light. In a receptive environment, seeds could liberate a viable embryo, viable higher cells, or a viable free-living organism (an endosymbiont or endophyte). Even if viability is lost, seeds still contain functional macromolecules and small molecules (DNA, RNA, proteins, amino acids, lipids, etc.) that could provide the chemical basis for starting or modifying life. The possible release of endophytes or endosymbionts from a seed-like space traveler suggests that multiple domains of life, defined in DNA sequence phylogenies, could be disseminated simultaneously from Earth. We consider the possibility of exospermia, the outward transfer of life, as well as introspermia, the inward transfer of life, both as contemporary and as ancient events.
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
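For contrast with the proposed joint framework, a minimal sketch of the naive baseline described above: stacking spectral and spatial features into one high-dimensional vector, reducing its dimension, and classifying. The arrays, class counts, and pipeline choices (PCA and an RBF SVM via scikit-learn) are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-pixel features: 200 spectral bands, 40 spatial (texture/morphology) attributes
rng = np.random.default_rng(1)
spectral = rng.normal(size=(1000, 200))
spatial = rng.normal(size=(1000, 40))
labels = rng.integers(0, 5, size=1000)

# Naive baseline: concatenate, reduce dimension, classify
stacked = np.hstack([spectral, spatial])
clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf"))
clf.fit(stacked, labels)
print(clf.score(stacked, labels))
```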
NASA Astrophysics Data System (ADS)
Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.
2013-12-01
A new method for decomposing the Earth's climate system into well-separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) a generalization of MSSA (Multichannel Singular Spectral Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOFs), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and numerically generated SST data several times longer in the STEOF basis; (iii) use of the numerically produced STEOF basis to exclude 'too slow' (and thus not correctly represented) processes from the real data. Applying the method to vector time series generated numerically by the INM RAS Coupled Climate Model [2] allows two climatic modes with noticeably different time scales, 3-5 and 9-11 years, to be separated from real SST anomaly data [3]. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
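A minimal sketch of the delay-embedding step that underlies MSSA/STEOF analysis: a multichannel series is stacked into a block trajectory matrix whose SVD yields space-time EOFs. The window length, grid size, and synthetic data below are assumptions; the model-generated basis and the exclusion of 'too slow' processes are not reproduced.

```python
import numpy as np

def mssa_steofs(X, L):
    """Space-time EOFs of a multichannel series X (time x channels) with window L."""
    T, D = X.shape
    K = T - L + 1
    # Trajectory matrix: each column stacks L consecutive time slices of all channels
    traj = np.empty((L * D, K))
    for k in range(K):
        traj[:, k] = X[k:k + L].ravel()
    U, s, _ = np.linalg.svd(traj, full_matrices=False)
    return U, s  # columns of U are space-time EOFs, s their singular values

# Hypothetical SST-anomaly-like data: 600 monthly steps on 50 grid points
rng = np.random.default_rng(2)
t = np.arange(600)
X = np.sin(2 * np.pi * t / 48)[:, None] * rng.normal(1, 0.1, 50) + 0.3 * rng.normal(size=(600, 50))
U, s = mssa_steofs(X, L=60)
print(U.shape, s[:5])
```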
Mukhopadhyay, J; Ghosh, K; Rangel, E F; Munstermann, L E
1998-12-01
The phlebotomine sand fly Lutzomyia longipalpis is the insect vector of visceral leishmaniasis, a protozoan disease of increasing incidence and distribution in Central and South America. Electrophoretic allele frequencies of 15 enzyme loci were compared among L. longipalpis populations selected across its distribution range in Brazil. The mean heterozygosities of two colonized geographic strains (one each from Colombia and Brazil) were 6% and 13%, respectively, with 1.6-1.9 alleles detected per locus. In contrast, among the seven widely separated field populations, the mean heterozygosity ranged from 11% to 16% with 2.1-2.9 alleles per locus. No locus was recovered that was diagnostic for any of the field populations. Allelic frequency differences among five field strains from the Amazon basin and eastern coastal Brazil were very low, with Nei's genetic distances of less than 0.01 separating them. The two inland and southerly samples from Minas Gerais (Lapinha) and Bahia (Jacobina) states were more distinctive, with genetic distances of 0.024-0.038 and 0.038-0.059, respectively, when compared with the five other samples. These differences were the consequence of several high-frequency alleles (glycerol-3-phosphate dehydrogenase [Gpd1.69] and phosphoglucomutase [Pgm1.69]) that are relatively uncommon in other strains. The low genetic distances, absence of diagnostic loci, and the distribution of genes in geographic space indicate that L. longipalpis of Brazil is a single, but genetically heterogeneous, polymorphic species.
Magnetism in curved geometries
NASA Astrophysics Data System (ADS)
Streubel, Robert
Deterministically bending and twisting two-dimensional structures in three-dimensional (3D) space provides a means to modify conventional functionalities or to launch novel ones by tailoring curvature and 3D shape. The recent developments of 3D curved magnetic geometries, ranging from theoretical predictions over fabrication to characterization using integral means as well as advanced magnetic tomography, will be reviewed. Theoretical works predict a curvature-induced effective anisotropy and effective Dzyaloshinskii-Moriya interaction resulting in a vast range of novel effects, including magnetochiral effects (chirality symmetry breaking) and topologically induced magnetization patterning. The remarkable development of nanotechnology, e.g. preparation of high-quality extended thin films, nanowires and frameworks via chemical and physical deposition as well as 3D nanoprinting, has granted first insights into the fundamental properties of 3D-shaped magnetic objects. Optimizing the magnetic and structural properties of these novel 3D architectures demands new investigation methods, particularly those based on vector tomographic imaging. Magnetic neutron tomography and electron-based 3D imaging, such as electron holography and vector field electron tomography, are well-established techniques to investigate macroscopic and nanoscopic samples, respectively. At the mesoscale, curved objects can be investigated using the novel method of magnetic X-ray tomography. In spite of experimental challenges in addressing the appealing theoretical predictions of curvature-induced effects, these 3D magnetic architectures have already proven their application potential for life sciences, targeted delivery, realization of 3D spin-wave filters, and magneto-encephalography devices, to name just a few. DOE BES MSED (DE-AC02-05-CH11231).
2002-04-01
Using the Solar Vector Magnetograph, a solar observation facility at NASA's Marshall Space Flight Center (MSFC), scientists from the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama, are monitoring the explosive potential of magnetic areas of the Sun. This effort could someday lead to better prediction of severe space weather, a phenomenon that occurs when blasts of particles and magnetic fields from the Sun impact the magnetosphere, the magnetic bubble around the Earth. When massive solar explosions, known as coronal mass ejections, blast through the Sun's outer atmosphere and plow toward Earth at speeds of thousands of miles per second, the resulting effects can be harmful to communication satellites and astronauts outside the Earth's magnetosphere. Like severe weather on Earth, severe space weather can be costly. On the ground, magnetic storms wrought by these solar particles can knock out electric power. Photographed are a group of contributing researchers in front of the Solar Vector Magnetograph at MSFC. The researchers are part of NSSTC's solar physics group, which develops instruments for measuring magnetic fields on the Sun. With these instruments, the group studies the origin, structure, and evolution of the solar magnetic fields and the impact they have on Earth's space environment.
Thrust vectoring for lateral-directional stability
NASA Technical Reports Server (NTRS)
Peron, Lee R.; Carpenter, Thomas
1992-01-01
The advantages and disadvantages of using thrust vectoring for lateral-directional control and the effects of reducing the tail size of a single-engine aircraft were investigated. The aerodynamic characteristics of the F-16 aircraft were generated by using the Aerodynamic Preliminary Analysis System II panel code. The resulting lateral-directional linear perturbation analysis of a modified F-16 aircraft with various tail sizes and yaw vectoring was performed at several speeds and altitudes to determine the stability and control trends for the aircraft compared to these trends for a baseline aircraft. A study of the paddle-type turning vane thrust vectoring control system as used on the National Aeronautics and Space Administration F/A-18 High Alpha Research Vehicle is also presented.
Turbulent fluid motion 2: Scalars, vectors, and tensors
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1991-01-01
The author shows that the sum or difference of two vectors is a vector. Similarly the sum of any two tensors of the same order is a tensor of that order. No meaning is attached to the sum of tensors of different orders, say u(sub i) + u(sub ij); that is not a tensor. In general, an equation containing tensors has meaning only if all the terms in the equation are tensors of the same order, and if the same unrepeated subscripts appear in all the terms. These facts will be used in obtaining appropriate equations for fluid turbulence. With the foregoing background, the derivation of appropriate continuum equations for turbulence should be straightforward.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-11-23
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
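A minimal from-scratch sketch of the fuzzy K-means (fuzzy c-means) codebook update at the core of the family of algorithms being accelerated; the acceleration itself (iteration reduction and fast nearest-neighbour search) is omitted. The training vectors, fuzziness exponent and codebook size are hypothetical.

```python
import numpy as np

def fuzzy_kmeans_codebook(X, n_codes, m=2.0, n_iter=30, seed=0):
    """Design a VQ codebook with the fuzzy K-means (fuzzy c-means) update rules."""
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), n_codes, replace=False)].copy()
    for _ in range(n_iter):
        # Squared distances between training vectors and code vectors
        d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1) + 1e-12
        # Fuzzy memberships: u_ik proportional to d_ik^(-2/(m-1)), normalized per training vector
        u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        # Weighted centroid update
        codebook = (w.T @ X) / w.sum(axis=0)[:, None]
    return codebook

# Hypothetical 4x4 image blocks flattened to 16-dimensional training vectors
rng = np.random.default_rng(3)
blocks = rng.random((2000, 16))
cb = fuzzy_kmeans_codebook(blocks, n_codes=64)
print(cb.shape)  # (64, 16)
```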
A static investigation of yaw vectoring concepts on two-dimensional convergent-divergent nozzles
NASA Technical Reports Server (NTRS)
Berrier, B. L.; Mason, M. L.
1983-01-01
The flow-turning capability and nozzle internal performance of yaw-vectoring nozzle geometries were tested in the NASA Langley 16-ft Transonic wind tunnel. The concept was investigated as a means of enhancing fighter jet performance. Five two-dimensional convergent-divergent nozzles were equipped for yaw-vectoring and examined. The configurations included a translating left sidewall, left and right sidewall flaps downstream of the nozzle throat, left sidewall flaps or port located upstream of the nozzle throat, and a powered rudder. Trials were also run with 20 deg of pitch thrust vectoring added. The feasibility of providing yaw-thrust vectoring was demonstrated, with the largest yaw vector angles being obtained with sidewall flaps downstream of the nozzle primary throat. It was concluded that yaw vector designs that scoop or capture internal nozzle flow provide the largest yaw-vector capability, but decrease the thrust the most.
Discontinuous finite element method for vector radiative transfer
NASA Astrophysics Data System (ADS)
Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping
2017-03-01
The discontinuous finite element method (DFEM) is applied to solve vector radiative transfer in participating media. The derivation of a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapping discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and in a two-dimensional square medium. The DFEM results agree very well with benchmark solutions in published references, showing that the DFEM developed in this paper is accurate and effective for solving vector radiative transfer problems.
Confidence regions of planar cardiac vectors
NASA Technical Reports Server (NTRS)
Dubin, S.; Herr, A.; Hunt, P.
1980-01-01
A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noeel M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
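SciPy ships a QMR solver, so a minimal usage sketch on a random, diagonally shifted non-Hermitian sparse system is shown below; it relies on scipy.sparse.linalg.qmr rather than the coupled two-term look-ahead implementation described in the paper, and the test matrix is hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr

# Hypothetical non-Hermitian sparse system, shifted to be well conditioned
rng = np.random.default_rng(4)
n = 500
A = sp.random(n, n, density=0.01, random_state=4, format="csr") + sp.eye(n) * 5.0
b = rng.normal(size=n)

x, info = qmr(A, b, maxiter=1000)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means the iteration converged
```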
Displacement field for an edge dislocation in a layered half-space
Savage, J.C.
1998-01-01
The displacement field for an edge dislocation in an Earth model consisting of a layer welded to a half-space of different material is found in the form of a Fourier integral following the method given by Weeks et al. [1968]. There are four elementary solutions to be considered: the dislocation is either in the half-space or the layer and the Burgers vector is either parallel or perpendicular to the layer. A general two-dimensional solution for dip-slip faulting or dike injection (arbitrary dip) can be constructed from a superposition of these elementary solutions. Surface deformations have been calculated for an edge dislocation located at the interface with the Burgers vector inclined 0°, 30°, 60°, and 90° to the interface for the case where the rigidity of the layer is half that of the half-space and the Poisson ratios are the same. Those displacement fields have been compared to the displacement fields generated by similarly situated edge dislocations in a uniform half-space. The surface displacement field produced by the edge dislocation in the layered half-space is very similar to that produced by an edge dislocation at a different depth in a uniform half-space. In general, a low-modulus (high-modulus) layer causes the half-space equivalent dislocation to appear shallower (deeper) than the actual dislocation in the layered half-space.
Karunamoorthi, Kaliyaperumal; Sabesan, Shanmugavelu
2009-12-01
A laboratory study was carried out to evaluate the relative efficacy of N,N-diethyl-m-toluamide (DEET)- and N,N-diethyl phenylacetamide (DEPA)-treated wristbands against three major vector mosquitoes, viz., Anopheles stephensi Liston, Culex quinquefasciatus Say and Aedes aegypti (L.), at two concentrations, 1.5 and 2.0 mg/cm². Overall, both DEET and DEPA showed various degrees of repellency against all three vector mosquitoes. DEET offered the highest mean complete protection of 317.0 min against An. stephensi, and DEPA provided 275.6 min of complete protection against Cx. quinquefasciatus at 2.0 mg/cm². However, DEPA-treated wristbands did not show any significant differences in the reduction of human landing rate or mean complete protection time against An. stephensi and Ae. aegypti between 1.5 and 2.0 mg/cm². DEET demonstrated relatively higher repellency against vector mosquitoes than DEPA. However, χ² analysis revealed no statistically significant difference in repellent efficiency between DEET and DEPA (P = 0.924). The present results suggest that repellent-treated wristbands could serve as a potential means of personal protection to avoid insect annoyance and reduce vector-borne disease transmission. They are extremely valuable whenever and wherever other kinds of personal protection measures are unfeasible.
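A minimal sketch of the kind of χ² comparison mentioned above, using scipy.stats.chi2_contingency on a hypothetical 2x2 table of protected versus bitten exposures; the counts are illustrative only and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: [protected, bitten] exposures for each repellent
table = [[118, 12],   # DEET-treated wristbands (illustrative counts)
         [115, 15]]   # DEPA-treated wristbands (illustrative counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```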
Tanga, M C; Ngundu, W I; Judith, N; Mbuh, J; Tendongfor, N; Simard, Frédéric; Wanji, S
2010-07-01
An entomological survey was conducted in Cameroon between October 2004 and September 2005 in nine localities targeted for malaria vector control, based on adult productivity and variability. Mosquitoes were collected by human-landing catches (HLCs) and pyrethrum spray catches. A total of 12 500 anophelines were collected and dissected: Anopheles gambiae s.l. (56.86%), An. funestus s.l. (32.57%), An. hancocki (9.38%), and An. nili (1.18%). PCR analysis revealed that specimens of the An. funestus group were An. funestus s.s., and that members of the An. gambiae complex were mostly An. melas and An. gambiae s.s. of the M and S molecular forms, with the M form predominant. The natural distribution patterns of Anopheles species were largely determined by altitude, with some species having unique environmental tolerance limits. A human blood index (HBI) of 99.05% was recorded. The mean probability of daily survival of the malaria vectors was 0.92, with an annual mean life expectancy of 21.9 days, and the expectation of infective life was long, with a mean of 7.4 days. The high survival rates suggest a high vector potential for the species. This information supports the development of a more focused and informed vector control intervention. Copyright 2010 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.
Student Solution Manual for Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.
Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.
Vector Fluxgate Magnetometer (VMAG) Development for DSX
2008-05-19
AFRL-RV-HA-TR-2008-1108. Mark B. Moldwin, UCLA Institute of ...; ... for Public Release; Distribution Unlimited. Abstract: UCLA is building a three-axis fluxgate magnetometer for the Air ...; the fluxgate magnetometer provides the necessary data to support both the Space Weather (SWx) specification and mapping requirements and the WPIx ...
Using Grid Cells for Navigation.
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-08-05
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
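A toy, one-dimensional sketch of the Fourier-shift idea referred to above: translating a periodic (grid-like) activity pattern multiplies its Fourier components by a phase factor, so the phase difference at the known grid frequency recovers the displacement modulo one grid period. The track length, grid period and shift are assumptions; this is not the paper's network model.

```python
import numpy as np

n = 256                      # positions around a 1-D track
period = 32                  # grid period in position bins
x = np.arange(n)
rate_start = 1 + np.cos(2 * np.pi * x / period)   # periodic "grid" activity at the start
shift = 11                                        # displacement to the goal (in bins)
rate_goal = np.roll(rate_start, shift)            # same code read out at the goal

k = n // period              # index of the fundamental spatial frequency
phase_start = np.angle(np.fft.rfft(rate_start)[k])
phase_goal = np.angle(np.fft.rfft(rate_goal)[k])
est = (phase_start - phase_goal) * period / (2 * np.pi) % period
print(est)   # ~11, i.e. the displacement modulo one grid period
```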
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and with the hierarchical approach of combining a vector quantizer, such as the self-organizing feature map (SOM) or NG, with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).
Remote sensing of surface currents with single shipborne high-frequency surface wave radar
NASA Astrophysics Data System (ADS)
Wang, Zhongbao; Xie, Junhao; Ji, Zhenyuan; Quan, Taifan
2016-01-01
High-frequency surface wave radar (HFSWR) is a useful technology for remote sensing of surface currents. It usually requires two (or more) stations spaced apart to create a two-dimensional (2D) current vector field. However, this method can only obtain measurements within the overlapping coverage, which wastes most of the data from each single-radar observation. Furthermore, it increases observation costs significantly. To reduce the number of required radars and increase the ocean area that can be measured, this paper proposes an economical methodology for remote sensing of the 2D surface current vector field using a single shipborne HFSWR. The methodology contains two parts: (1) a real space-time multiple signal classification (MUSIC) method based on sparse representation and unitary transformation techniques is developed for measuring the radial currents from the spreading first-order spectra, and (2) the stream function method is introduced to obtain the 2D surface current vector field. Some important conclusions are drawn, and simulations are included to validate them.
The design of transfer trajectory for Ivar asteroid exploration mission
NASA Astrophysics Data System (ADS)
Qiao, Dong; Cui, Hutao; Cui, Pingyuan
2009-12-01
The demand for exploring small bodies, such as comets and asteroids, motivated a Chinese deep space exploration mission to the near-Earth asteroid Ivar. A design and optimization method for the transfer trajectory to asteroid Ivar is discussed in this paper. The transfer trajectory for rendezvous with asteroid Ivar is designed by means of Earth gravity assist with deep space maneuver (Delta-VEGA) technology. A Delta-VEGA transfer trajectory is realized by several trajectory segments, which connect the deep space maneuver and the swingby point. Each trajectory segment is found by solving a Lambert problem. Through adjusting the deep space maneuver and the arrival time, the match condition of the swingby is satisfied. To further reduce the total mission velocity increments, a procedure is developed that minimizes the total velocity increments for this transfer trajectory scheme. The trajectory optimization problem is solved with a quasi-Newton algorithm utilizing analytic first derivatives, which are derived from the transversality conditions associated with the optimization formulation and primer vector theory. The simulation results show that the transfer trajectory scheme decreases C3 and the total velocity increments by 48.80% and 13.20%, respectively.
A simple map-based localization strategy using range measurements
NASA Astrophysics Data System (ADS)
Moore, Kevin L.; Kutiyanawala, Aliasgar; Chandrasekharan, Madhumita
2005-05-01
In this paper we present a map-based approach to localization. We consider indoor navigation in known environments based on the idea of a "vector cloud" by observing that any point in a building has an associated vector defining its distance to the key structural components (e.g., walls, ceilings, etc.) of the building in any direction. Given a building blueprint we can derive the "ideal" vector cloud at any point in space. Then, given measurements from sensors on the robot we can compare the measured vector cloud to the possible vector clouds cataloged from the blueprint, thus determining location. We present algorithms for implementing this approach to localization, using the Hamming norm, the 1-norm, and the 2-norm. The effectiveness of the approach is verified by experiments on a 2-D testbed using a mobile robot with a 360° laser range-finder and through simulation analysis of robustness.
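A minimal sketch of the matching step: a measured range "vector cloud" is compared against ideal clouds precomputed from the blueprint, and the catalogued location minimizing the chosen norm is returned. The catalogue, noise level, and the 0.05 tolerance used for the Hamming-style count are hypothetical, not the authors' implementation.

```python
import numpy as np

def localize(measured, catalog, norm="l2"):
    """Return the index of the catalogued location whose ideal vector cloud
    best matches the measured cloud (arrays of ranges, one per bearing)."""
    diff = catalog - measured[None, :]
    if norm == "l2":
        scores = np.linalg.norm(diff, axis=1)
    elif norm == "l1":
        scores = np.abs(diff).sum(axis=1)
    else:  # "hamming": count bearings whose range disagrees beyond a tolerance
        scores = (np.abs(diff) > 0.05).sum(axis=1)
    return int(np.argmin(scores))

# Hypothetical catalogue: ideal clouds (360 ranges, one per degree) at 500 blueprint points
rng = np.random.default_rng(5)
catalog = rng.uniform(0.3, 10.0, size=(500, 360))
true_idx = 123
measured = catalog[true_idx] + rng.normal(0, 0.02, 360)   # noisy laser scan
print(localize(measured, catalog), localize(measured, catalog, "l1"))
```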
Tensor sparse coding for positive definite matrices.
Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2014-03-01
In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (for example, image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices, which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.
Gu, Bing; Xu, Danfeng; Pan, Yang; Cui, Yiping
2014-07-01
Based on the vectorial Rayleigh-Sommerfeld integrals, the analytical expressions for azimuthal-variant vector fields diffracted by an annular aperture are presented. This helps us to investigate the propagation behaviors and the focusing properties of apertured azimuthal-variant vector fields under nonparaxial and paraxial approximations. The diffraction by a circular aperture, a circular disk, or propagation in free space can be treated as special cases of this general result. Simulation results show that the transverse intensity, longitudinal intensity, and far-field divergence angle of nonparaxially apertured azimuthal-variant vector fields depend strongly on the azimuthal index, the outer truncation parameter and the inner truncation parameter of the annular aperture, as well as the ratio of the waist width to the wavelength. Moreover, the multiple-ring-structured intensity pattern of the focused azimuthal-variant vector field, which originates from the diffraction effect caused by an annular aperture, is experimentally demonstrated.
Ladder operators for the Klein-Gordon equation with a scalar curvature term
NASA Astrophysics Data System (ADS)
Mück, Wolfgang
2018-01-01
Recently, Cardoso, Houri and Kimura constructed generalized ladder operators for massive Klein-Gordon scalar fields in space-times with conformal symmetry. Their construction requires a closed conformal Killing vector, which is also an eigenvector of the Ricci tensor. Here, a similar procedure is used to construct generalized ladder operators for the Klein-Gordon equation with a scalar curvature term. It is proven that a ladder operator requires the existence of a conformal Killing vector, which must satisfy an additional property. This property is necessary and sufficient for the construction of a ladder operator. For maximally symmetric space-times, the results are equivalent to those of Cardoso, Houri and Kimura.
NASA Technical Reports Server (NTRS)
Buchholz, Peter; Ciardo, Gianfranco; Donatelli, Susanna; Kemper, Peter
1997-01-01
We present a systematic discussion of algorithms to multiply a vector by a matrix expressed as the Kronecker product of sparse matrices, extending previous work in a unified notational framework. Then, we use our results to define new algorithms for the solution of large structured Markov models. In addition to a comprehensive overview of existing approaches, we give new results with respect to: (1) managing certain types of state-dependent behavior without incurring extra cost; (2) supporting both Jacobi-style and Gauss-Seidel-style methods by appropriate multiplication algorithms; (3) speeding up algorithms that consider probability vectors of size equal to the "actual" state space instead of the "potential" state space.
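A minimal illustration of the identity underlying such algorithms, (A ⊗ B) vec(X) = vec(B X Aᵀ), which multiplies a vector by a Kronecker product without ever forming it; shown for two dense factors and checked against numpy.kron. The sparse, multi-factor, structured-Markov machinery of the paper is not reproduced.

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product.

    Uses (A kron B) vec(X) = vec(B X A^T) with column-major vec.
    """
    m, n = A.shape
    p, q = B.shape
    X = x.reshape((q, n), order="F")
    return (B @ X @ A.T).reshape(-1, order="F")

rng = np.random.default_rng(6)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(5, 6))
x = rng.normal(size=4 * 6)
print(np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x))  # True
```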
NASA Astrophysics Data System (ADS)
Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.
2008-11-01
Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
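A minimal scikit-learn sketch of the combination step: an MLP and an SVC are trained on (hypothetical) ranked attributes and their class probabilities are fused with element-wise minimum, maximum and mean. The data, network size and decision threshold are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hypothetical ranked seismic attributes and chimney/non-chimney labels
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 400) > 0).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
svc = SVC(probability=True, random_state=0).fit(X, y)

p_mlp = mlp.predict_proba(X)[:, 1]
p_svc = svc.predict_proba(X)[:, 1]
for rule, p in [("min", np.minimum(p_mlp, p_svc)),
                ("max", np.maximum(p_mlp, p_svc)),
                ("mean", 0.5 * (p_mlp + p_svc))]:
    print(rule, ((p > 0.5) == y).mean())   # training accuracy under each fusion rule
```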
Color measurement and discrimination
NASA Technical Reports Server (NTRS)
Wandell, B. A.
1985-01-01
Theories of color measurement attempt to provide a quantitative means for predicting whether two lights will be discriminable to an average observer. All color measurement theories can be characterized as follows: suppose lights a and b evoke responses from three color channels characterized as vectors, v(a) and v(b); the vector difference v(a) - v(b) corresponds to a set of channel responses that would be generated by some real light, call it *. According to theory, a and b will be discriminable when * is detectable. A detailed development and test of the classic color measurement approach are reported. In the absence of a luminance component in the test stimuli, a and b, the theory holds well. In the presence of a luminance component, the theory is clearly false. When a luminance component is present, discrimination judgments depend largely on whether the lights being discriminated fall in separate, categorical regions of color space. The results suggest that sensory estimation of surface color uses different methods, and the choice of method depends upon properties of the image. When there is significant luminance variation, a categorical method is used, while in the absence of significant luminance variation, judgments are continuous and consistent with the measurement approach.
Hamiltonian dynamics of vortex and magnetic lines in hydrodynamic type systems
NASA Astrophysics Data System (ADS)
Kuznetsov, E. A.; Ruban, V. P.
2000-01-01
Vortex line and magnetic line representations are introduced for a description of flows in ideal hydrodynamics and magnetohydrodynamics (MHD), respectively. For incompressible fluids, it is shown with the help of this transformation that the equations of motion for the vorticity Ω and the magnetic field follow from a variational principle. By means of this representation, it is possible to integrate the hydrodynamic type system with the Hamiltonian H = ∫|Ω| dr and some other systems. It is also demonstrated that these representations allow one to remove from the noncanonical Poisson brackets, defined in the space of divergence-free vector fields, the degeneracy connected with the vorticity frozenness for the Euler equation and with magnetic field frozenness for ideal MHD. For MHD, a new Weber-type transformation is found. It is shown how this transformation can be obtained from the two-fluid model when electrons and ions can be considered as two independent fluids. The Weber-type transformation for ideal MHD gives the whole Lagrangian vector invariant. When this invariant is absent, this transformation coincides with the Clebsch representation analog introduced by V. E. Zakharov and E. A. Kuznetsov [Dokl. Akad. Nauk 194, 1288 (1970); Sov. Phys. Dokl. 15, 913 (1971)].
Shim, Jae Kun; Karol, Sohit; Hsu, Jeffrey; de Oliveira, Marcio Alves
2008-04-01
The aim of this study was to investigate the contralateral motor overflow in children during single-finger and multi-finger maximum force production tasks. Forty-five right-handed children, 5-11 years of age, produced maximum isometric pressing force in flexion or extension with single fingers or all four fingers of their right hand. The forces produced by individual fingers of the right and left hands were recorded and analyzed in a four-dimensional finger force vector space. The results showed that increases in task (right) hand finger forces were linearly associated with non-task (left) hand finger forces. The ratio of the non-task hand finger force magnitude to the corresponding task hand finger force magnitude, termed motor overflow magnitude (MOM), was greater in extension than in flexion. The index finger flexion task showed the smallest MOM values. The similarity between the directions of task hand and non-task hand finger force vectors in the four-dimensional finger force vector space, termed motor overflow direction (MOD), was greatest for index finger tasks and smallest for little finger tasks. The MOM of a four-finger task was greater than the sum of the MOMs of single-finger tasks, a phenomenon termed motor overflow surplus. Contrary to previous studies, no single-finger or four-finger tasks showed significant changes of MOM or MOD with the age of the children. We conclude that the contralateral motor overflow in children during finger maximum force production tasks depends upon the task fingers and the magnitude and direction of the task finger forces.
Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images
Srivastava, Anuj
2010-01-01
We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692
DIA-datasnooping and identifiability
NASA Astrophysics Data System (ADS)
Zaminpardaz, S.; Teunissen, P. J. G.
2018-04-01
In this contribution, we present and analyze datasnooping in the context of the DIA method. As the DIA method for the detection, identification and adaptation of mismodelling errors is concerned with estimation and testing, it is the combination of both that needs to be considered. This combination is rigorously captured by the DIA estimator. We discuss and analyze the DIA-datasnooping decision probabilities and the construction of the corresponding partitioning of misclosure space. We also investigate the circumstances under which two or more hypotheses are nonseparable in the identification step. By means of a theorem on the equivalence between the nonseparability of hypotheses and the inestimability of parameters, we demonstrate that one can forget about adapting the parameter vector for hypotheses that are nonseparable. However, as this concerns the complete vector and not necessarily functions of it, we also show that parameter functions may exist for which adaptation is still possible. It is shown what this adaptation looks like and how it changes the structure of the DIA estimator. To demonstrate the performance of the various elements of DIA-datasnooping, we apply the theory to some selected examples. We analyze how geometry changes in the measurement setup affect the testing procedure, by studying their partitioning of misclosure space, the decision probabilities and the minimal detectable and identifiable biases. The difference between these two minimal biases is highlighted by showing the difference between their corresponding contributing factors. We also show that if two alternative hypotheses, say Hi and Hj, are nonseparable, the testing procedure may have different levels of sensitivity to Hi-biases compared to the same Hj-biases.
NASA Astrophysics Data System (ADS)
Hirasawa, Kazuki; Sawada, Shinya; Saitoh, Atsushi
Systems for monitoring the daily life of elderly people are very important in Japan's super-aged society. In this paper, we describe a method for recognizing a resident's daily activities using information on changes in the indoor ambient atmosphere. The measured environmental changes are gas and smell, temperature, humidity, and brightness, all of which are closely related to the resident's daily activities. A measurement system with seven sensors (four gas sensors, a thermistor, a humidity sensor, and a CdS light sensor) was developed for recording indoor ambient atmosphere changes. Measurements were made in a one-room type residential space. A 21-dimensional activity vector was composed for each daily activity from the acquired data. These vectors were classified into nine categories corresponding to the main activities by using the Self-Organizing Map (SOM) method. The results show that recognition of the main daily activities based on information about indoor ambient atmosphere changes is possible. Moreover, we also describe a method for obtaining information on local gas and smell environmental changes. Gas and smell environmental changes are related to daily activities, especially the very important actions of eating and drinking, and local information makes it possible to relate place and activity. For this purpose, a gas sensing module with an operation function that synchronizes with a human detection signal was developed and evaluated. The results show that the sensor module can acquire and emphasize local gas environment changes caused by a person's activity.
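A minimal from-scratch self-organizing map sketch showing how 21-dimensional activity vectors could be mapped onto a small grid and grouped by best-matching unit. The grid size, learning schedule and random activity vectors are assumptions; the authors' sensor pipeline and category labels are not reproduced.

```python
import numpy as np

def train_som(data, rows=3, cols=3, n_iter=2000, seed=0):
    """Train a small SOM; returns the (rows*cols, dim) weight grid."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows * cols, dim))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(n_iter):
        lr = 0.5 * (1 - t / n_iter)                 # decaying learning rate
        sigma = max(1.5 * (1 - t / n_iter), 0.5)    # decaying neighbourhood radius
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(1))        # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

# Hypothetical 21-dimensional activity vectors from the ambient sensors
rng = np.random.default_rng(8)
activity = rng.random((300, 21))
som = train_som(activity)
units = [int(np.argmin(((som - v) ** 2).sum(1))) for v in activity]  # map unit per vector
print(sorted(set(units)))
```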
Expression of short hairpin RNAs using the compact architecture of retroviral microRNA genes.
Burke, James M; Kincaid, Rodney P; Aloisio, Francesca; Welch, Nicole; Sullivan, Christopher S
2017-09-29
Short hairpin RNAs (shRNAs) are effective in generating stable repression of gene expression. RNA polymerase III (RNAP III) type III promoters (U6 or H1) are typically used to drive shRNA expression. While useful for some knockdown applications, the robust expression of U6/H1-driven shRNAs can induce toxicity and generate heterogeneous small RNAs with undesirable off-target effects. Additionally, typical U6/H1 promoters encompass the majority of the ∼270 base pairs (bp) of vector space required for shRNA expression. This can limit the efficacy and/or number of delivery vector options, particularly when delivery of multiple gene/shRNA combinations is required. Here, we develop a compact shRNA (cshRNA) expression system based on retroviral microRNA (miRNA) gene architecture that uses RNAP III type II promoters. We demonstrate that cshRNAs coded from as little as 100 bps of total coding space can precisely generate small interfering RNAs (siRNAs) that are active in the RNA-induced silencing complex (RISC). We provide an algorithm with a user-friendly interface to design cshRNAs for desired target genes. This cshRNA expression system reduces the coding space required for shRNA expression by >2-fold as compared to the typical U6/H1 promoters, which may facilitate therapeutic RNAi applications where delivery vector space is limiting. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously, which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and is particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that is just being addressed in recent literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity in optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome image; CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reduction.
A Vector Space Model for Automatic Indexing.
ERIC Educational Resources Information Center
Salton, G.; And Others
In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other, or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; that is, retrieval performance correlates inversely…
Closeup view of an Aft Skirt being prepared for mating ...
Close-up view of an Aft Skirt being prepared for mating with sub-assemblies in the Solid Rocket Booster Assembly and Refurbishment Facility at Kennedy Space Center. The most prominent features in this view are the six Thrust Vector Control System access ports, three per hydraulic actuator. - Space Transportation System, Solid Rocket Boosters, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Algebra and topology for applications to physics
NASA Technical Reports Server (NTRS)
Rozhkov, S. S.
1987-01-01
The principal concepts of algebra and topology are examined with emphasis on applications to physics. In particular, attention is given to sets and mapping; topological spaces and continuous mapping; manifolds; and topological groups and Lie groups. The discussion also covers the tangential spaces of the differential manifolds, including Lie algebras, vector fields, and differential forms, properties of differential forms, mapping of tangential spaces, and integration of differential forms.
NASA Astrophysics Data System (ADS)
Bobra, M. G.; Sun, X.; Hoeksema, J. T.; Turmon, M.; Liu, Y.; Hayashi, K.; Barnes, G.; Leka, K. D.
2014-09-01
A new data product from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) called Space-weather HMI Active Region Patches (SHARPs) is now available. SDO/HMI is the first space-based instrument to map the full-disk photospheric vector magnetic field with high cadence and continuity. The SHARP data series provide maps in patches that encompass automatically tracked magnetic concentrations for their entire lifetime; map quantities include the photospheric vector magnetic field and its uncertainty, along with Doppler velocity, continuum intensity, and line-of-sight magnetic field. Furthermore, keywords in the SHARP data series provide several parameters that concisely characterize the magnetic-field distribution and its deviation from a potential-field configuration. These indices may be useful for active-region event forecasting and for identifying regions of interest. The indices are calculated per patch and are available on a twelve-minute cadence. Quick-look data are available within approximately three hours of observation; definitive science products are produced approximately five weeks later. SHARP data are available at jsoc.stanford.edu and maps are available in either of two different coordinate systems. This article describes the SHARP data products and presents examples of SHARP data and parameters.
Efficiency Evaluation of Nozawa-Style Black Light Trap for Control of Anopheline Mosquitoes
Lee, Hee Il; Seo, Bo Youl; Shin, E-Hyun; Burkett, Douglas A.; Lee, Jong-Koo
2009-01-01
House-residual spraying and insecticide-treated bed nets have achieved some success in controlling anthropophilic and endophagic vectors. However, these methods have relatively low efficacy in Korea because Anopheles sinensis, the primary malaria vector, is highly zoophilic and exophilic. We therefore focused our vector control efforts within livestock enclosures, using ultraviolet black light traps as a mechanical control measure. We found that black light traps captured significantly more mosquitoes at 2 and 2.5 m above the ground (P < 0.05). We also evaluated the effectiveness of trap spacing within the livestock enclosure. In general, traps spaced between 4 and 7 m apart captured mosquitoes more efficiently than those spaced closer together (P > 0.05). Based on these findings, we concluded that each black light trap in the livestock enclosures killed 7,586 female mosquitoes per trap per night during the peak mosquito season (July-August). In May-August 2003, additional concurrent field trials were conducted in Ganghwa county. We observed a 74.9% reduction (P < 0.05) of An. sinensis in human dwellings and a 61.5% reduction (P > 0.05) in the livestock enclosures. The black light trap operation in the livestock enclosures proved to be an effective control method and should be incorporated into existing control strategies in developed countries. PMID:19488423
Feature-space-based FMRI analysis using the optimal linear transformation.
Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S
2010-09-01
The optimal linear transformation (OLT), an image analysis technique of feature space, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed for the method to be verified and compared with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
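A minimal sketch of the first step described above: convolving a canonical double-gamma hemodynamic response function with a stimulus timing vector to obtain an ideal response time series. The TR, block design and HRF parameters are assumptions (SPM-like defaults), not the study's exact model.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                              # hypothetical repetition time (s)
n_scans = 150
t = np.arange(0, 32, TR)              # HRF support (s)
# Canonical double-gamma HRF (SPM-like parameters), normalized to unit peak
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.max()

stimulus = np.zeros(n_scans)          # hypothetical block design: 20 s on / 20 s off
for onset in range(0, n_scans, 20):
    stimulus[onset:onset + 10] = 1.0

ideal = np.convolve(stimulus, hrf)[:n_scans]   # ideal hemodynamic time series
print(ideal.shape)
```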
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, A.R.; Bartell, S.M.
1988-06-01
The state of an ecosystem at any time t may be characterized by a multidimensional state vector x(t). Changes in state are represented by the trajectory traced out by x(t) over time. The effects of toxicant stress are summarized by the displacement of a perturbed state vector, x_p(t), relative to an appropriate control, x_c(t). Within a multivariate statistical framework, the response of an ecosystem to perturbation is conveniently quantified by the distance separating x_p(t) from x_c(t) as measured by a Mahalanobis metric. Use of the Mahalanobis metric requires that the covariance matrix associated with the control state vector be estimated. State space displacement analysis was applied to data on the response of aquatic microcosms and outdoor ponds to alkylphenols. Dose-response relationships were derived using calculated state space separations as integrated measures of the ecological effects of toxicant exposure. Inspection of the data also revealed that the covariance structure varied both with time and with toxicant exposure, suggesting that analysis of such changes might be a useful tool for probing control mechanisms underlying ecosystem dynamics. 90 refs., 53 figs., 9 tabs.
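A minimal sketch of the displacement measure described here, assuming the control covariance matrix is estimated from replicate control observations (array shapes and values are illustrative only):

```python
import numpy as np

def mahalanobis_displacement(x_perturbed, x_control_samples):
    """Mahalanobis distance of a perturbed state vector x_p(t) from the
    distribution of control state vectors x_c(t)."""
    mu = x_control_samples.mean(axis=0)
    cov = np.cov(x_control_samples, rowvar=False)   # estimated control covariance
    diff = x_perturbed - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# toy example: 8 state variables measured in 12 control replicates
rng = np.random.default_rng(1)
controls = rng.normal(size=(12, 8))
print(mahalanobis_displacement(controls.mean(axis=0) + 0.5, controls))
```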
Arrows as anchors: An analysis of the material features of electric field vector arrows
NASA Astrophysics Data System (ADS)
Gire, Elizabeth; Price, Edward
2014-12-01
Representations in physics possess both physical and conceptual aspects that are fundamentally intertwined and can interact to support or hinder sense making and computation. We use distributed cognition and the theory of conceptual blending with material anchors to interpret the roles of conceptual and material features of representations in students' use of representations for computation. We focus on the vector-arrows representation of electric fields and describe this representation as a conceptual blend of electric field concepts, physical space, and the material features of the representation (i.e., the physical writing and the surface upon which it is drawn). In this representation, spatial extent (e.g., distance on paper) is used to represent both distances in coordinate space and magnitudes of electric field vectors. In conceptual blending theory, this conflation is described as a clash between the input spaces in the blend. We explore the benefits and drawbacks of this clash, as well as other features of this representation. This analysis is illustrated with examples from clinical problem-solving interviews with upper-division physics majors. We see that while these intermediate physics students make a variety of errors using this representation, they also use the geometric features of the representation to add electric field contributions and to organize the problem situation productively.
The Local Stellar Velocity Field via Vector Spherical Harmonics
NASA Technical Reports Server (NTRS)
Makarov, V. V.; Murphy, D. W.
2007-01-01
We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V_X, V_Y, V_Z) = (10.5, 18.5, 7.3) +/- 0.1 km/s, not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V_X, V_Y, V_Z) = (9.9, 15.6, 6.9) +/- 0.2 km/s. The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher-degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star. The Oort parameters determined by a straightforward least-squares adjustment in vector spherical harmonics are A = 14.0 +/- 1.4, B = 13.1 +/- 1.2, K = 1.1 +/- 1.8, and C = 2.9 +/- 1.4 km/s/kpc. The physical meaning and the implications of these parameters are discussed in the framework of a general linear model of the velocity field. We find a few statistically significant higher-degree harmonic terms that do not correspond to any parameters in the classical linear model. One of them, a third-degree electric harmonic, is tentatively explained as the response to a negative linear gradient of rotation velocity with distance from the Galactic plane, which we estimate at approximately -20 km/s/kpc. A similar vertical gradient of rotation velocity has been detected for more distant stars representing the thick disk (z > 1 kpc), but here we surmise its existence in the thin disk at z < 200 pc. The most unexpected and unexplained term within the Ogorodnikov-Milne model is the first-degree magnetic harmonic, representing a rigid rotation of the stellar field about the axis -Y pointing opposite to the direction of rotation. This harmonic comes out with a statistically robust coefficient of 6.2 +/- 0.9 km/s/kpc and is also present in the velocity field of more distant stars. The ensuing upward vertical motion of stars in the general direction of the Galactic center and the downward motion in the anticenter direction are opposite to the vector field expected from the stationary Galactic warp model.
NASA Technical Reports Server (NTRS)
Craig, R. G. (Principal Investigator)
1983-01-01
Richmond, Virginia and Denver, Colorado were study sites in an effort to determine the effect of autocorrelation on the accuracy of a parallelepiped classifier of LANDSAT digital data. The autocorrelation was assumed to decay to insignificant levels when sampled at distances of at least ten pixels. Spectral themes were developed using both blocks of adjacent pixels and groups of pixels spaced at least ten pixels apart. Effects of geometric distortions were minimized by using only pixels from the interiors of land cover sections. Accuracy was evaluated for three classes (agriculture, residential, and "all other"); both type 1 and type 2 errors were evaluated by means of overall classification accuracy. All classes gave comparable results. Accuracy was approximately the same for both techniques; however, the variance in accuracy was significantly higher for the themes developed from autocorrelated data. The vectors of mean spectral response were nearly identical regardless of sampling method used. The estimated variances were much larger when using autocorrelated pixels.
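For readers unfamiliar with the classifier, a parallelepiped classifier simply assigns a pixel to a class whose per-band spectral bounds contain it. The sketch below is a generic illustration with synthetic training data, not the study's processing chain.

```python
import numpy as np

def train_parallelepiped(training):
    """Per-class spectral box: (min, max) in each band from training pixels."""
    return {cls: (pix.min(axis=0), pix.max(axis=0)) for cls, pix in training.items()}

def classify(pixel, boxes, fallback="all other"):
    """Assign the pixel to the first class whose spectral box contains it."""
    for cls, (lo, hi) in boxes.items():
        if np.all(pixel >= lo) and np.all(pixel <= hi):
            return cls
    return fallback

rng = np.random.default_rng(2)
training = {"agriculture": rng.normal(40, 5, (50, 4)),   # 4 synthetic spectral bands
            "residential": rng.normal(70, 5, (50, 4))}
print(classify(np.array([41.0, 38.0, 42.0, 39.0]), train_parallelepiped(training)))
```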
The 1D Richards' equation in two layered soils: a Filippov approach to treat discontinuities
NASA Astrophysics Data System (ADS)
Berardi, Marco; Difonzo, Fabio; Vurro, Michele; Lopez, Luciano
2018-05-01
The infiltration process into the soil is generally modeled by the Richards' partial differential equation (PDE). In this paper a new approach for modeling the infiltration process through the interface of two different soils is proposed, where the interface is seen as a discontinuity surface defined by suitable state variables. Thus, the original 1D Richards' PDE, enriched by a particular choice of the boundary conditions, is first approximated by means of a time semidiscretization, that is, by means of the transversal method of lines (TMOL). In such a way a sequence of discontinuous initial value problems, described by a sequence of second order differential systems in the space variable, is derived. Then, Filippov theory on discontinuous dynamical systems may be applied in order to study the relevant dynamics of the problem. The numerical integration of the semidiscretized differential system will be performed by using a one-step method, which employs an event driven procedure to locate the discontinuity surface and to adequately change the vector field.
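The event-driven idea, integrating with one vector field until the trajectory reaches the discontinuity surface and then switching fields, can be illustrated with SciPy's event handling. The toy right-hand sides and threshold below are placeholders; the actual fields come from the TMOL semidiscretization of Richards' equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two smooth vector fields on either side of a discontinuity surface g(y) = 0.
# (Toy stand-ins for the fields produced by the semidiscretization.)
f_upper = lambda t, y: np.array([-0.8 * y[0] + 0.1])
f_lower = lambda t, y: np.array([-0.2 * y[0] + 0.05])

def surface(t, y):                 # discontinuity surface, here a state threshold
    return y[0] - 0.5
surface.terminal = True            # stop exactly when the surface is reached
surface.direction = -1             # only when crossing from above

# Integrate with the first field until the event, then restart with the second.
sol1 = solve_ivp(f_upper, (0.0, 10.0), [1.0], events=surface, max_step=0.05)
t_hit, y_hit = sol1.t[-1], sol1.y[:, -1]
sol2 = solve_ivp(f_lower, (t_hit, 10.0), y_hit, max_step=0.05)
print(f"discontinuity surface located at t = {t_hit:.4f}")
```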
Towards non-classical walks with bright laser pulses
NASA Astrophysics Data System (ADS)
Sephton, B.; Dudley, A.; Forbes, A.
2017-08-01
In the search for means to increase computational power beyond what is currently available, quantum walks (QWs) have become a promising option, with derived quantum algorithms providing a speed-up over those currently implemented in classical computers. It has additionally been shown that the physical implementation of QWs can provide a successful computational basis for a quantum computer. Considerable effort has therefore gone into physical realizations over the 20+ years since their introduction, employing phenomena such as electrons and photons. The principal problems encountered with such quantum systems are vulnerability to environmental influence and scalability. Here we outline how to perform the QW by exploiting the interference characteristics inherent in the phenomenon, so as to mitigate these challenges. We utilize the properties of vector beams to physically implement such a walk in orbital angular momentum space by manipulating polarization and exploiting the non-separability of such beams.
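For orientation, a discrete-time quantum walk on a line consists of a coin operation followed by a coin-conditioned shift. The minimal NumPy sketch below uses a Hadamard coin, with the position register standing in for the orbital-angular-momentum modes and the coin for polarization; it is a simplified stand-in, not the optical implementation itself.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line: Hadamard coin + conditional shift."""
    size = 2 * steps + 1
    psi = np.zeros((size, 2), dtype=complex)          # amplitude[position, coin]
    psi[size // 2, 0] = 1.0                           # walker starts at the centre
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                               # coin toss at every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                  # coin |0> steps right
        shifted[:-1, 1] = psi[1:, 1]                  # coin |1> steps left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)             # position probabilities

print(hadamard_walk(3).round(3))
```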
Programmable growth of branched silicon nanowires using a focused ion beam.
Jun, Kimin; Jacobson, Joseph M
2010-08-11
Although significant progress has been made in being able to spatially define the position of material layers in vapor-liquid-solid (VLS) grown nanowires, less work has been carried out in deterministically defining the positions of nanowire branching points to facilitate more complicated structures beyond simple 1D wires. Work to date has focused on the growth of randomly branched nanowire structures. Here we develop a means for programmably designating nanowire branching points by means of focused ion beam-defined VLS catalytic points. This technique is repeatable without loss of fidelity, allowing multiple rounds of branching-point definition followed by branch growth, resulting in complex structures. The single-crystal nature of this approach allows us to describe the resulting structures with linear combinations of base vectors in three-dimensional (3D) space. Finally, by etching the resulting 3D-defined wire structures, branched nanotubes with interconnected nanochannels inside were fabricated. We believe that the techniques developed here should comprise a useful tool for extending linear VLS nanowire growth to generalized 3D wire structures.
1993-01-01
The development of the electric space actuator represents an unusual case of space technology transfer wherein the product was commercialized before it was used for the intended space purpose. MOOG, which supplies the thrust vector control hydraulic actuators for the Space Shuttle and brake actuators for the Space Orbiter, initiated development of electric actuators for aerospace and industrial use in the early 1980s. NASA used the technology to develop an electric replacement for the Space Shuttle main engine TVC actuator. An electric actuator is used to take passengers on a realistic flight to Jupiter at the US Space and Rocket Center, Huntsville, Alabama.
Pre-vector variational inequality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lai-Jiu
1994-12-31
Let X be a Hausdorff topological vector space and (Y, D) an ordered Hausdorff topological vector space ordered by a convex cone D. Let L(X, Y) be the space of all bounded linear operators, E ⊆ X a nonempty set, and T : E → L(X, Y), η : E × E → E functions. For x, y ∈ Y, we write x ≮ y if y − x ∉ int D, where int D is the interior of D. We consider the following two problems: find x ∈ E such that ⟨T(x), η(y, x)⟩ ≮ 0 for all y ∈ E; and find x ∈ E such that ⟨T(x), η(y, x)⟩ ≯ 0 for all y ∈ E and ⟨T(x), η(y, x)⟩ ∈ C_p^{w+} = {l ∈ L(X, Y) : ⟨l, η(x, 0)⟩ ≮ 0 for all x ∈ E}, where ⟨T(x), y⟩ denotes the linear operator T(x) applied to y, that is, T(x)(y). We call the first problem the pre-vector variational inequality problem (Pre-VVIP) and the second the pre-vector complementarity problem (Pre-VCP). If X = R^n, Y = R, D = R_+, and η(y, x) = y − x, then our problem is the well-known variational inequality first studied by Hartman and Stampacchia. If Y = R, D = R_+, and η(y, x) = y − x, our problem is the variational problem in infinite-dimensional space. In this research, we impose different conditions on T(x), η, X, and ⟨T(x), η(y, x)⟩ and investigate existence theorems for these problems. As an application of one of our results, we establish an existence theorem for a weak minimum of the problem (P): V-min f(x) subject to x ∈ E, where f : X → Y is a Fréchet differentiable invex function.
Connection between the two branches of the quantum two-stream instability across the k space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bret, A.; Haas, F.
2010-05-15
The stability of two quantum counterstreaming electron beams is investigated within the quantum plasma fluid equations for arbitrarily oriented wave vectors k. The analysis reveals that the two quantum two-stream unstable branches are indeed connected by a continuum of unstable modes with oblique wave vectors. Using the longitudinal approximation, the stability domain for any k is analytically explained, together with the growth rate.
Field Computation and Nonpropositional Knowledge.
1987-09-01
field computer. It is based on a generalization of Taylor's theorem to continuous-dimensional vector spaces. A number of field computations are illustrated, including several transforma... paradigm. The "old" AI has been quite successful in performing a number of difficult tasks, such as theorem proving, chess playing, medical diagnosis and
DYMAFLEX: DYnamic Manipulation FLight EXperiment
2013-09-03
thrust per nozzle and minimize propellant mass and tank mass. This study compared carbon dioxide, nitrous oxide, and R134-A. These results were...equations of motion of a space manipulator, showing their top-level, matrix-vector representation to be of identical form to those of a fixed-base...the system inertia matrix, q is the position state vector (consisting of the manipulator joint angles θ, spacecraft attitude quaternion, and
NASA Astrophysics Data System (ADS)
Miao, Xijiang; Mukhopadhyay, Rishi; Valafar, Homayoun
2008-10-01
Advances in NMR instrumentation and pulse sequence design have resulted in easier acquisition of Residual Dipolar Coupling (RDC) data. However, computational and theoretical analysis of this type of data has continued to challenge the international community of investigators because of their complexity and rich information content. Contemporary use of RDC data has required a priori assignment, which significantly increases the overall cost of structural analysis. This article introduces a novel algorithm that utilizes unassigned RDC data acquired from multiple alignment media (nD-RDC, n ⩾ 3) for simultaneous extraction of the relative order tensor matrices and reconstruction of the interacting vectors in space. Estimation of the relative order tensors and reconstruction of the interacting vectors can be invaluable in a number of endeavors. An example application has been presented where the reconstructed vectors have been used to quantify the fitness of a template protein structure to the unknown protein structure. This work has other important direct applications such as verification of the novelty of an unknown protein and validation of the accuracy of an available protein structure model in drug design. More importantly, the presented work has the potential to bridge the gap between experimental and computational methods of structure determination.
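The underlying forward model relates each RDC to the orientation of its internuclear vector through an order tensor. A minimal sketch is given below; the maximal coupling constant and the Saupe matrix entries are illustrative assumptions, not values from the article.

```python
import numpy as np

def rdc_from_vector(v, saupe, d_max=21700.0):
    """RDC (Hz) predicted for an internuclear vector v and a 3x3 symmetric,
    traceless Saupe order tensor; d_max is an illustrative maximal coupling."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return d_max * v @ saupe @ v

S = np.array([[ 3e-4,  1e-4,  0.0 ],      # hypothetical order tensor (traceless)
              [ 1e-4, -1e-4,  2e-4],
              [ 0.0 ,  2e-4, -2e-4]])
print(rdc_from_vector([0.3, 0.5, 0.81], S))
```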
NASA Astrophysics Data System (ADS)
Gusain, S.
2017-12-01
We study the hemispheric patterns in the electric current helicity distribution on the Sun. The photospheric magnetic field vector is now routinely measured by a variety of instruments. The SOLIS/VSM instrument of NSO observes full-disk Stokes spectra in photospheric lines, which are used to derive vector magnetograms. Hinode SP is a space-based spectropolarimeter with the same observable as SOLIS, albeit with a limited field-of-view (FOV) but high spatial resolution. SDO/HMI derives vector magnetograms from space from full-disk Stokes measurements, with rather limited spectral resolution, in a different photospheric line. Further, these datasets now span several years: SOLIS/VSM from 2003, Hinode SP from 2006, and SDO/HMI since 2010. Using these time series of vector magnetograms, we compute the electric current density in active regions during solar cycle 24 and study its hemispheric distribution. Many studies show that helicity parameters and proxies exhibit a strong hemispheric bias, with the northern hemisphere showing preferentially negative helicity and the southern hemisphere positive helicity. We will confirm these results for cycle 24 from three different datasets and evaluate the statistical significance of the hemispheric bias. Further, we discuss the solar cycle variation of the hemispheric helicity pattern during cycle 24 and its implications for solar dynamo models.
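One common current-helicity proxy is Bz·Jz, with the vertical current density obtained from the transverse field components via Ampère's law. A minimal finite-difference sketch, with synthetic data standing in for a vector magnetogram, is shown below; whether this is the exact proxy used in the study is not stated, so treat it as illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def current_helicity_proxy(bx, by, bz, dx, dy):
    """Vertical current density Jz = (dBy/dx - dBx/dy)/mu0 and the proxy Bz*Jz.

    bx, by, bz : 2-D field components in Tesla;  dx, dy : pixel sizes in metres.
    """
    dBy_dx = np.gradient(by, dx, axis=1)
    dBx_dy = np.gradient(bx, dy, axis=0)
    jz = (dBy_dx - dBx_dy) / MU0
    return jz, bz * jz

# synthetic patch standing in for a SOLIS/Hinode/HMI vector magnetogram
rng = np.random.default_rng(3)
bx, by, bz = (rng.normal(0, 0.01, (128, 128)) for _ in range(3))
jz, hc = current_helicity_proxy(bx, by, bz, dx=360e3, dy=360e3)
print("mean helicity proxy:", hc.mean())
```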
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
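For context, the classic MUSIC scan that FINES refines projects candidate source topographies onto the estimated noise-only subspace; FINES instead uses a small, region-specific vector set drawn from that subspace. The sketch below implements only the classic scan, with an assumed signal-subspace dimension and illustrative variable names.

```python
import numpy as np

def music_spectrum(R, manifold, n_sources=2):
    """Classic MUSIC scan: candidate topographies nearly orthogonal to the
    estimated noise-only subspace produce peaks of the returned spectrum.

    R        : (n_sensors, n_sensors) spatial covariance of the EEG data
    manifold : (n_candidates, n_sensors) candidate source topographies (lead fields)
    """
    w, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
    E_noise = V[:, : R.shape[0] - n_sources]       # estimated noise-only subspace
    spec = np.empty(len(manifold))
    for i, a in enumerate(manifold):
        a = a / np.linalg.norm(a)
        spec[i] = 1.0 / np.sum((E_noise.T @ a) ** 2)
    return spec

# usage: R estimated from EEG epochs, manifold computed from a forward head model
```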
Fast higher-order MR image reconstruction using singular-vector separation.
Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P
2012-07-01
Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration. However, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
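The core idea, representing the perturbing space-time terms as a short sum of separable functions found by singular-vector analysis, can be sketched with a truncated SVD. The data below are synthetic and only illustrate the decomposition, not the full reconstruction pipeline.

```python
import numpy as np

def separable_approximation(phase, rank):
    """Approximate a space-time perturbation matrix as a short sum of separable
    terms, phase ~ sum_r s_r(x) * t_r(t), via a truncated SVD."""
    U, s, Vt = np.linalg.svd(phase, full_matrices=False)
    spatial = U[:, :rank] * s[:rank]         # rank spatial basis maps (columns)
    temporal = Vt[:rank]                     # rank temporal basis functions (rows)
    return spatial, temporal

# synthetic higher-order field perturbation sampled on (n_voxels, n_timepoints)
rng = np.random.default_rng(4)
phase = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 512))
phase += 1e-3 * rng.normal(size=phase.shape)
spatial, temporal = separable_approximation(phase, rank=3)
err = np.linalg.norm(phase - spatial @ temporal) / np.linalg.norm(phase)
print(f"relative approximation error: {err:.1e}")
```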
Students' difficulties with vector calculus in electrodynamics
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; De Cock, Mieke
2015-12-01
Understanding Maxwell's equations in differential form is of great importance when studying the electrodynamic phenomena discussed in advanced electromagnetism courses. It is therefore necessary that students master the use of vector calculus in physical situations. In this light we investigated the difficulties second year students at KU Leuven encounter with the divergence and curl of a vector field in mathematical and physical contexts. We have found that they are quite skilled at doing calculations, but struggle with interpreting graphical representations of vector fields and applying vector calculus to physical situations. We have found strong indications that traditional instruction is not sufficient for our students to fully understand the meaning and power of Maxwell's equations in electrodynamics.
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
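For reference, a single affine projection update with K selected input vectors takes the familiar regularized form shown below; the selection and state-decision logic of the proposed algorithm is not reproduced here, and the toy data are only a plausibility check.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-3):
    """One affine projection update.

    w : (N,)   current filter coefficients
    X : (N, K) matrix whose K columns are the selected input vectors
    d : (K,)   desired responses paired with those input vectors
    """
    e = d - X.T @ w                                   # a-priori errors
    w_new = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w_new, e

# toy usage: identify a 4-tap system from K = 3 input vectors per iteration
rng = np.random.default_rng(5)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
for _ in range(200):
    X = rng.normal(size=(4, 3))
    d = X.T @ w_true + 1e-3 * rng.normal(size=3)
    w, _ = apa_update(w, X, d)
print(np.round(w, 3))
```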
Pre-existing immunity against vaccine vectors – friend or foe?
Saxena, Manvendra; Van, Thi Thu Hao; Baird, Fiona J.; Coloe, Peter J.
2013-01-01
Over the last century, the successful attenuation of multiple bacterial and viral pathogens has led to an effective, robust and safe form of vaccination. Recently, these vaccines have been evaluated as delivery vectors for heterologous antigens, as a means of simultaneous vaccination against two pathogens. The general consensus from published studies is that these vaccine vectors have the potential to be both safe and efficacious. However, some of the commonly employed vectors, for example Salmonella and adenovirus, often have pre-existing immune responses in the host and this has the potential to modify the subsequent immune response to a vectored antigen. This review examines the literature on this topic, and concludes that for bacterial vectors there can in fact, in some cases, be an enhancement in immunogenicity, typically humoral, while for viral vectors pre-existing immunity is a hindrance for subsequent induction of cell-mediated responses. PMID:23175507
Papanikolaou, Eleni; Georgomanoli, Maria; Stamateris, Evangelos; Panetsos, Fottes; Karagiorga, Markisia; Tsaftaridis, Panagiotis; Graphakos, Stelios
2012-01-01
Abstract To address how low titer, variable expression, and gene silencing affect gene therapy vectors for hemoglobinopathies, in a previous study we successfully used the HPFH (hereditary persistence of fetal hemoglobin)-2 enhancer in a series of oncoretroviral vectors. On the basis of these data, we generated a novel insulated self-inactivating (SIN) lentiviral vector, termed GGHI, carrying the Aγ-globin gene with the −117 HPFH point mutation and the HPFH-2 enhancer and exhibiting a pancellular pattern of Aγ-globin gene expression in MEL-585 clones. To assess the eventual clinical feasibility of this vector, GGHI was tested on CD34+ hematopoietic stem cells from nonmobilized peripheral blood or bone marrow from 20 patients with β-thalassemia. Our results show that GGHI increased the production of γ-globin by 32.9% as measured by high-performance liquid chromatography (p=0.001), with a mean vector copy number per cell of 1.1 and a mean transduction efficiency of 40.3%. Transduced populations also exhibited a lower rate of apoptosis and resulted in improvement of erythropoiesis with a higher percentage of orthochromatic erythroblasts. This is the first report of a locus control region (LCR)-free SIN insulated lentiviral vector that can be used to efficiently produce the anticipated therapeutic levels of γ-globin protein in the erythroid progeny of primary human thalassemic hematopoietic stem cells in vitro. PMID:21875313
NASA Technical Reports Server (NTRS)
Bates, Lisa B.; Young, David T.
2012-01-01
This paper describes recent developmental testing to verify the integration of a developmental electromechanical actuator (EMA) with high rate lithium ion batteries and a cross platform extensible controller. Testing was performed at the Thrust Vector Control Research, Development and Qualification Laboratory at the NASA George C. Marshall Space Flight Center. Electric Thrust Vector Control (ETVC) systems like the EMA may significantly reduce recurring launch costs and complexity compared to heritage systems. Electric actuator mechanisms and control requirements across dissimilar platforms are also discussed with a focus on the similarities leveraged and differences overcome by the cross platform extensible common controller architecture.
A vector matching method for analysing logic Petri nets
NASA Astrophysics Data System (ADS)
Du, YuYue; Qi, Liang; Zhou, MengChu
2011-11-01
Batch processing functions and passing-value indeterminacy in cooperative systems can be described and analysed by logic Petri nets (LPNs). To directly analyse the properties of LPNs, the concept of transition enabling vector sets is presented, and a vector matching method for determining the enabled transitions is proposed in this article. The incidence matrix of LPNs is defined, an equation describing the marking change due to a transition's firing is given, and a reachability tree is constructed. The state space explosion is mitigated to a certain extent by directly analysing LPNs. Finally, the validity and reliability of the proposed method are illustrated by an example in electronic commerce.
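As background, the marking-evolution equation supported by an incidence matrix is M' = M + C[:, t] for an enabled transition t. The sketch below shows this for an ordinary Petri net; the logic expressions that LPNs attach to input and output arcs are omitted, so it is only the substrate the article builds on.

```python
import numpy as np

pre  = np.array([[1, 0],     # tokens consumed: place i by transition j
                 [0, 1],
                 [0, 0]])
post = np.array([[0, 0],     # tokens produced
                 [1, 0],
                 [0, 1]])
C = post - pre               # incidence matrix

def enabled(marking, t):
    return np.all(marking >= pre[:, t])

def fire(marking, t):
    assert enabled(marking, t), "transition not enabled"
    return marking + C[:, t]

M0 = np.array([1, 0, 0])
M1 = fire(M0, 0)             # -> [0, 1, 0]
M2 = fire(M1, 1)             # -> [0, 0, 1]
print(M1, M2)
```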
Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.
1992-01-01
An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.
Generalized decompositions of dynamic systems and vector Lyapunov functions
NASA Astrophysics Data System (ADS)
Ikeda, M.; Siljak, D. D.
1981-10-01
The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.
The Prediction of Broadband Shock-Associated Noise Including Propagation Effects
NASA Technical Reports Server (NTRS)
Miller, Steven; Morris, Philip J.
2011-01-01
An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) as the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at streamwise locations of the SRANS solution. However, the developed acoustic analogy could easily be based on any adjoint vector Green's function solver, such as one that makes no assumptions about the mean flow. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with the formulation of Morris and Miller (AIAAJ 2010). A large number of predictions are generated using three different nozzles over a wide range of fully expanded Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise labs. In addition, two models for the so-called 'fine-scale' mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include the propagation effects, especially in the upstream direction of the jet.
Simple cloning strategy using GFPuv gene as positive/negative indicator.
Miura, Hiromi; Inoko, Hidetoshi; Inoue, Ituro; Tanaka, Masafumi; Sato, Masahiro; Ohtsuka, Masato
2011-09-15
Because construction of expression vectors is the first requisite in the functional analysis of genes, development of simple cloning systems is a major requirement during the postgenomic era. In the current study, we developed cloning vectors for gain- or loss-of-function studies by using the GFPuv gene as a positive/negative indicator of cloning. These vectors allow us to easily detect correct clones and obtain expression vectors from a simple procedure by means of the combined use of the GFPuv gene and a type IIS restriction enzyme. Copyright © 2011 Elsevier Inc. All rights reserved.
A hybrid approach to select features and classify diseases based on medical data
NASA Astrophysics Data System (ADS)
AbdelLatif, Hisham; Luo, Jiawei
2018-03-01
Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases, based on three medical datasets: the Arrhythmia, Breast cancer, and Hepatitis datasets. This methodology, called k-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses k-means clustering together with the ANOVA statistic to preprocess the data and select the significant features, and Support Vector Machines in the classification process. To compare and evaluate the performance, we chose three classification algorithms (decision tree, Naïve Bayes, and Support Vector Machines) and applied the medical datasets directly to these algorithms. Our methodology gave much better classification accuracy, 98% on the Arrhythmia dataset, 92% on the Breast cancer dataset, and 88% on the Hepatitis dataset, compared with applying the medical data directly to decision tree, Naïve Bayes, and Support Vector Machines. The ROC curve and precision obtained with K-ANOVA-SVM were also better than those of the other algorithms.
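A rough scikit-learn sketch of ANOVA-based feature selection followed by an SVM classifier is given below; the k-means preprocessing step of the proposed K-ANOVA-SVM is omitted, and the built-in breast cancer dataset merely stands in for the datasets used in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# ANOVA F-test feature selection followed by an SVM classifier.
X, y = load_breast_cancer(return_X_y=True)
clf = Pipeline([
    ("scale", StandardScaler()),
    ("anova", SelectKBest(f_classif, k=10)),   # keep the 10 most significant features
    ("svm", SVC(kernel="rbf", C=1.0)),
])
print(cross_val_score(clf, X, y, cv=5).mean())
```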
High-order graph matching based feature selection for Alzheimer's disease identification.
Liu, Feng; Suk, Heung-Il; Wee, Chong-Yaw; Chen, Huafu; Shen, Dinggang
2013-01-01
One of the main limitations of l1-norm feature selection is that it focuses on estimating the target vector for each sample individually without considering relations with other samples. However, it is believed that the geometrical relation among target vectors in the training set may provide useful information, and it would be natural to expect that the predicted vectors have similar geometric relations to the target vectors. To overcome these limitations, we formulate this as a graph-matching feature selection problem between a predicted graph and a target graph. In the predicted graph a node is represented by a predicted vector that may describe regional gray matter volume or cortical thickness features, and in the target graph a node is represented by a target vector that includes the class label and clinical scores. In particular, we devise new regularization terms in sparse representation to impose high-order graph matching between the target vectors and the predicted ones. Finally, the selected regional gray matter volume and cortical thickness features are fused in kernel space for classification. Using the ADNI dataset, we evaluate the effectiveness of the proposed method and obtain accuracies of 92.17% and 81.57% in AD and MCI classification, respectively.
State-Dependent Pseudo-Linear Filter for Spacecraft Attitude and Rate Estimation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2001-01-01
This paper presents the development and performance of a special algorithm for estimating the attitude and angular rate of a spacecraft. The algorithm is a pseudo-linear Kalman filter, which is an ordinary linear Kalman filter that operates on a linear model whose matrices are current state estimate dependent. The nonlinear rotational dynamics equation of the spacecraft is presented in the state space as a state-dependent linear system. Two types of measurements are considered. One type is a measurement of the quaternion of rotation, which is obtained from a newly introduced star tracker based apparatus. The other type of measurement is that of vectors, which permits the use of a variety of vector measuring sensors like sun sensors and magnetometers. While quaternion measurements are related linearly to the state vector, vector measurements constitute a nonlinear function of the state vector. Therefore, in this paper, a state-dependent linear measurement equation is developed for the vector measurement case. The state-dependent pseudo linear filter is applied to simulated spacecraft rotations and adequate estimates of the spacecraft attitude and rate are obtained for the case of quaternion measurements as well as of vector measurements.
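Generically, a pseudo-linear Kalman filter runs the ordinary linear predict/update equations but re-evaluates the model matrices at the current state estimate. A minimal sketch is shown below; it is not the spacecraft-specific attitude/rate model of the paper, and F_of/H_of are user-supplied, state-dependent matrix functions.

```python
import numpy as np

def pseudo_linear_kf_step(x, P, z, F_of, H_of, Q, R):
    """One predict/update cycle of a pseudo-linear Kalman filter: the standard
    linear KF equations, with the system and measurement matrices
    re-evaluated at the current state estimate."""
    F = F_of(x)                               # state-dependent transition matrix
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    H = H_of(x_pred)                          # state-dependent measurement matrix
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```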
Page segmentation using script identification vectors: A first look
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Cannon, M.; Kelly, P.
1997-07-01
Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
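The visualization step, projecting the thirteen-dimensional script identification vectors onto their first three principal components and mapping them to red, green, and blue, can be sketched as follows; random vectors stand in for real per-component template distances.

```python
import numpy as np

def script_vectors_to_rgb(vectors):
    """Project 13-dimensional script-identification vectors onto their first three
    principal components and rescale each component to [0, 1] as an RGB colour."""
    X = vectors - vectors.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:3].T                            # first three principal components
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    return (pcs - lo) / (hi - lo + 1e-12)         # one RGB triple per component

# toy input: one 13-dimensional distance vector per connected component
rng = np.random.default_rng(6)
rgb = script_vectors_to_rgb(rng.normal(size=(500, 13)))
print(rgb.shape)                                  # (500, 3)
```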
Internal performance characteristics of thrust-vectored axisymmetric ejector nozzles
NASA Technical Reports Server (NTRS)
Lamb, Milton
1995-01-01
A series of thrust-vectored axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at the Langley Research Center. This study indicated that discontinuities in the performance occurred at low primary nozzle pressure ratios and that these discontinuities were mitigated by decreasing expansion area ratio. The addition of secondary flow increased the performance of the nozzles. The mid-to-high range of secondary flow provided the most overall improvements, and the greatest improvements were seen for the largest ejector area ratio. Thrust vectoring the ejector nozzles caused a reduction in performance and discharge coefficient. With or without secondary flow, the vectored ejector nozzles produced thrust vector angles that were equivalent to or greater than the geometric turning angle. With or without secondary flow, spacing ratio (ejector passage symmetry) had little effect on performance (gross thrust ratio), discharge coefficient, or thrust vector angle. For the unvectored ejectors, a small amount of secondary flow was sufficient to reduce the pressure levels on the shroud to provide cooling, but for the vectored ejector nozzles, a larger amount of secondary air was required to reduce the pressure levels to provide cooling.
Compressed Sensing and Electron Microscopy
2010-01-01
dimensional space R^n and so there is a lot of collapsing of information. For example, any vector η in the null space N = N(Φ) of Φ is mapped... assignment of the pixel intensity f̂_P in the image. Thus, the pixel size is the same as the grid spacing h and we can (with only a slight abuse of notation... offers a fresh view of signal/image acquisition and reconstruction.
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
Vector curvaton with varying kinetic function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimopoulos, Konstantinos; Karciauskas, Mindaugas; Wagstaff, Jacques M.
2010-01-15
A new model realization of the vector curvaton paradigm is presented and analyzed. The model consists of a single massive Abelian vector field, with a Maxwell-type kinetic term. By assuming that the kinetic function and the mass of the vector field are appropriately varying during inflation, it is shown that a scale-invariant spectrum of superhorizon perturbations can be generated. These perturbations can contribute to the curvature perturbation of the Universe. If the vector field remains light at the end of inflation it is found that it can generate substantial statistical anisotropy in the spectrum and bispectrum of the curvature perturbation. In this case the non-Gaussianity in the curvature perturbation is predominantly anisotropic, which will be a testable prediction in the near future. If, on the other hand, the vector field is heavy at the end of inflation then it is demonstrated that particle production is approximately isotropic and the vector field alone can give rise to the curvature perturbation, without directly involving any fundamental scalar field. The parameter space for both possibilities is shown to be substantial. Finally, toy models are presented which show that the desired variation of the mass and kinetic function of the vector field can be realistically obtained, without unnatural tunings, in the context of supergravity or superstrings.
NASA Astrophysics Data System (ADS)
Tondu, Bertrand
2003-05-01
The mathematical modelling of industrial robots is based on the vectorial nature of the n-dimensional joint space of the robot, defined as a kinematic chain with n degrees of freedom. However, in our opinion, the vectorial nature of the joint space has been insufficiently discussed in the literature. We establish the vectorial nature of the joint space of an industrial robot from the fundamental studies of B. Roth on screws. To cite this article: B. Tondu, C. R. Mecanique 331 (2003).
NASA Astrophysics Data System (ADS)
Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.
2016-09-01
In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g. Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge base (KB), and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy, so that they enable a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.
A new theory of phylogeny inference through construction of multidimensional vector space.
Kitazoe, Y; Kurihara, Y; Narita, Y; Okuhara, Y; Tominaga, A; Suzuki, T
2001-05-01
Here, a new theory of molecular phylogeny is developed in a multidimensional vector space (MVS). The molecular evolution is represented as a successive splitting of branch vectors in the MVS. The end points of these vectors are the extant species and indicate the specific directions reflected by their individual histories of evolution in the past. This representation makes it possible to infer the phylogeny (evolutionary histories) from the spatial positions of the end points. Search vectors are introduced to draw out the groups of species distributed around them. These groups are classified according to the nearby order of branches with them. A law of physics is applied to determine the species positions in the MVS. The species are regarded as the particles moving in time according to the equation of motion, finally falling into the lowest-energy state in spite of their randomly distributed initial condition. This falling into the ground state results in the construction of an MVS in which the relative distances between two particles are equal to the substitution distances. The species positions are obtained prior to the phylogeny inference. Therefore, as the number of species increases, the species vectors can be more specific in an MVS of a larger size, such that the vector analysis gives a more stable and reliable topology. The efficacy of the present method was examined by using computer simulations of molecular evolution in which all the branch- and end-point sequences of the trees are known in advance. In the phylogeny inference from the end points with 100 multiple data sets, the present method consistently reconstructed the correct topologies, in contrast to standard methods. In applications to 185 vertebrates in the alpha-hemoglobin, the vector analysis drew out the two lineage groups of birds and mammals. A core member of the mammalian radiation appeared at the base of the mammalian lineage. Squamates were isolated from the bird lineage to compose the outgroup, while the other living reptilians were directly coupled with birds without forming any sister groups. This result is in contrast to the morphological phylogeny and is also different from those of recent molecular analyses.
Thermodynamic integration of the free energy along a reaction coordinate in Cartesian coordinates
NASA Astrophysics Data System (ADS)
den Otter, W. K.
2000-05-01
A generalized formulation of the thermodynamic integration (TI) method for calculating the free energy along a reaction coordinate is derived. Molecular dynamics simulations with a constrained reaction coordinate are used to sample conformations. These are then projected onto conformations with a higher value of the reaction coordinate by means of a vector field. The accompanying change in potential energy plus the divergence of the vector field constitute the derivative of the free energy. Any vector field meeting some simple requirements can be used as the basis of this TI expression. Two classes of vector fields are of particular interest here. The first recovers the conventional TI expression, with its cumbersome dependence on a full set of generalized coordinates. As the free energy is a function of the reaction coordinate only, it should in principle be possible to derive an expression depending exclusively on the definition of the reaction coordinate. This objective is met by the second class of vector fields to be discussed. The potential of mean constraint force (PMCF) method, after averaging over the unconstrained momenta, falls in this second class. The new method is illustrated by calculations on the isomerization of n-butane, and is compared with existing methods.
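Whatever vector field is chosen, the final step is a numerical integration of sampled free-energy derivatives over the reaction coordinate. A minimal trapezoidal sketch with hypothetical window values is shown below; it does not implement the constrained-dynamics or projection machinery described above.

```python
import numpy as np

def integrate_free_energy(xi, dA_dxi):
    """Integrate sampled free-energy derivatives dA/dxi (e.g. mean constraint
    forces plus geometric corrections) over the reaction coordinate xi with the
    trapezoidal rule, returning the profile A(xi) with A(xi[0]) = 0."""
    increments = 0.5 * (dA_dxi[1:] + dA_dxi[:-1]) * np.diff(xi)
    return np.concatenate(([0.0], np.cumsum(increments)))

# hypothetical derivatives sampled from constrained simulations at 12 windows
xi = np.linspace(0.0, 1.0, 12)
dA = np.sin(2 * np.pi * xi)            # stand-in for simulation averages
print(integrate_free_energy(xi, dA).round(3))
```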
NASA Technical Reports Server (NTRS)
Melis, Matthew E.
2003-01-01
NASA Glenn Research Center's Structural Mechanics Branch has years of expertise in using explicit finite element methods to predict the outcome of ballistic impact events. Shuttle engineers from the NASA Marshall Space Flight Center and NASA Kennedy Space Center required assistance in assessing the structural loads that a newly proposed thrust vector control system for the Space Shuttle solid rocket booster (SRB) aft skirt would be expected to see during its recovery splashdown.
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Atkins, E.; Chorin, A. J.
2011-12-01
The task in data assimilation is to identify the state of a system from an uncertain model supplemented by a stream of incomplete and noisy data. The model is typically given in the form of a discretization of an Ito stochastic differential equation (SDE), x(n+1) = R(x(n)) + G W(n), where x is an m-dimensional vector and n = 0, 1, 2, .... The m-dimensional vector function R and the m x m matrix G depend on the SDE as well as on the discretization scheme, and W is an m-dimensional vector whose elements are independent standard normal variates. The data are y(n) = h(x(n)) + Q V(n), where h is a k-dimensional vector function, Q is a k x k matrix and V is a vector whose components are independent standard normal variates. One can use statistics of the conditional probability density (pdf) of the state given the observations, p(n+1) = p(x(n+1)|y(1), ... , y(n+1)), to identify the state x(n+1). Particle filters approximate p(n+1) by sequential Monte Carlo and rely on the recursive formulation of the target pdf, p(n+1) ∝ p(x(n+1)|x(n)) p(y(n+1)|x(n+1)). The pdf p(x(n+1)|x(n)) can be read off the model equations to be a Gaussian with mean R(x(n)) and covariance matrix Σ = GG^T, where T denotes the transpose; the pdf p(y(n+1)|x(n+1)) is a Gaussian with mean h(x(n+1)) and covariance QQ^T. In a sampling-importance-resampling (SIR) filter one samples new values for the particles from a prior pdf and then one weighs these samples with weights determined by the observations, to yield an approximation to p(n+1). Such weighting schemes often yield small weights for many of the particles. Implicit particle filtering overcomes this problem by using the observations to generate the particles, thus focusing attention on regions of large probability. A suitable algebraic equation that depends on the model and the observations is constructed for each particle, and its solution yields high probability samples of p(n+1). In the current formulation of the implicit particle filter, the state covariance matrix Σ is assumed to be non-singular. In the present work we consider the case where the covariance Σ is singular. This happens in particular when the noise is spatially smooth and can be represented by a small number of Fourier coefficients, as is often the case in geophysical applications. We derive an implicit filter for this problem and show that it is very efficient, because the filter operates in a space whose dimension is the rank of Σ, rather than the full model dimension. We compare the implicit filter to SIR, to the Ensemble Kalman Filter and to variational methods, and also study how information from data is propagated from observed to unobserved variables. We illustrate the theory on two coupled nonlinear PDEs in one space dimension that have been used as a test-bed for geomagnetic data assimilation. We observe that the implicit filter gives good results with few (2-10) particles, while SIR requires thousands of particles for similar accuracy. We also find lower limits to the accuracy of the filter's reconstruction as a function of data availability.
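For contrast with the implicit filter, the baseline SIR step described above (sample from the prior, weight by the likelihood, resample) can be written compactly. The sketch follows the model notation of the abstract and is not the implicit particle filter itself; the functions R_model and h are user-supplied.

```python
import numpy as np

def sir_step(particles, y, R_model, G, h, Q, rng):
    """One sampling-importance-resampling (SIR) step for the model
    x(n+1) = R(x(n)) + G W(n),  y(n+1) = h(x(n+1)) + Q V(n+1),
    with W, V standard normal.  particles has shape (n_particles, m)."""
    n, m = particles.shape
    # 1. sample from the prior p(x(n+1) | x(n))
    noise = rng.standard_normal((n, m)) @ G.T
    particles = np.array([R_model(x) for x in particles]) + noise
    # 2. weight by the likelihood p(y(n+1) | x(n+1))
    innov = y - np.array([h(x) for x in particles])
    Sinv = np.linalg.inv(Q @ Q.T)
    logw = -0.5 * np.einsum("ij,jk,ik->i", innov, Sinv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3. resample to combat the small-weight degeneracy described above
    return particles[rng.choice(n, size=n, p=w)]
```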
Edge Detection and Geometric Methods in Computer Vision,
1985-02-01
enlightening discussion) Derivations of Eqs. 3.29, 3.31, 3.32 (some statistics). Experimental results (pictures) -- not very informative, extensive or useful. ...neurophysiology and hardware design. If one views the state space as a free vector space on the labels over the field of weights (which we take to be R), then
NASA Marshall Space Flight Center solar observatory report, January - June 1993
NASA Technical Reports Server (NTRS)
Smith, J. E.
1993-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during January-June 1993. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, July - October 1993
NASA Technical Reports Server (NTRS)
Smith, J. E.
1994-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during June-October 1993. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, March - May 1994
NASA Technical Reports Server (NTRS)
Smith, J. E.
1994-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during March-May 1994. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
The derivative and tangent operators of a motion in Lorentzian space
NASA Astrophysics Data System (ADS)
Durmaz, Olgun; Aktaş, Buşra; Gündoğan, Halit
In this paper, by using Lorentzian matrix multiplication, the L-Tangent operator is obtained in Lorentzian space. The L-Tangent operators related to planar, spherical and spatial motion are computed via special matrix groups. L-Tangent operators are related to vectors. Some illustrative examples of applications of L-Tangent operators are also presented.
Sighting the International Space Station
ERIC Educational Resources Information Center
Teets, Donald
2008-01-01
This article shows how to use six parameters describing the International Space Station's orbit to predict when and in what part of the sky observers can look for the station as it passes over their location. The method requires only a good background in trigonometry and some familiarity with elementary vector and matrix operations. An included…
G14A-06- Analysis of the DORIS, GNSS, SLR, VLBI and Gravimetric Time Series at the GGOS Core Sites
NASA Technical Reports Server (NTRS)
Moreaux, G.; Lemoine, F.; Luceri, V.; Pavlis, E.; MacMillan, D.; Bonvalot, S.; Saunier, J.
2017-01-01
We analyze the time series at the 3-4 multi-technique GGOS core sites to compare the spectral content of the space geodetic and gravimetric time series, and we evaluate the level of agreement between the space geodesy measurements and the physical tie vectors.
Scattering by multiple cylinders located on both sides of an interface
NASA Astrophysics Data System (ADS)
Lee, Siu-Chun
2018-07-01
The solution for scattering by multiple parallel infinite cylinders located in adjacent half spaces with dissimilar refractive index is presented in this paper. The incident radiation is an arbitrarily polarized plane wave propagating in the upper half space in the plane perpendicular to the axis of the cylinders. The formulation of the electromagnetic field vectors utilized Hertz potentials that are expressed in terms of an expansion of cylindrical wave functions. It accounts for the near-field multiple scattering, Fresnel effect at the interface, and interaction between cylinders in both half spaces. Analytical formulas are derived for the electromagnetic field and Poynting vector in the far-field. The present solution provides the theoretical framework for deducing the solutions for scattering by cylinders located on either side of an interface irradiated by a propagating or an evanescent incident wave. Deduction of these solutions from the present formulation is demonstrated. Numerical results are presented to illustrate the frustration of total internal reflection and scattering of light beyond the critical angle by nanocylinders located in either or both half spaces.
Zimmermann, Karel; Gibrat, Jean-François
2010-01-04
Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. This vector encoding introduces a Euclidian metric in the amino acid space, consistent with substitution matrices. Such a numerical description of the amino acid is useful when intrinsic properties of amino acids are necessary, for instance, building sequence profiles or finding consensus sequences, using machine learning algorithms such as Support Vector Machine and Neural Networks algorithms.
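As a rough illustration of the vector encoding described above (not the authors' code), the following sketch factors a small symmetric, substitution-like matrix with toy values, so that dot products of the resulting amino acid vectors approximately reproduce the matrix entries; the nonnegative part of the spectrum is kept to obtain a Euclidean embedding.

```python
import numpy as np

# Toy symmetric "substitution-like" matrix for 4 amino acids (illustrative values,
# not real PAM/BLOSUM entries).
S = np.array([[ 4., -1., -2.,  0.],
              [-1.,  5., -3., -2.],
              [-2., -3.,  6., -1.],
              [ 0., -2., -1.,  4.]])

# Symmetric eigendecomposition; keeping the nonnegative part of the spectrum gives a
# positive-semidefinite approximation whose Gram matrix stays close to S.
w, V = np.linalg.eigh(S)
w_pos = np.clip(w, 0.0, None)
X = V * np.sqrt(w_pos)          # row i is the Euclidean vector for amino acid i

# Dot products of the vectors approximately reproduce the matrix entries.
S_hat = X @ X.T
print(np.round(S_hat, 2))
print("max abs deviation:", np.abs(S_hat - S).max())
```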
Using Machine Learning for Advanced Anomaly Detection and Classification
NASA Astrophysics Data System (ADS)
Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.
2016-09-01
Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes development of advanced algorithms to deliver significant improvements in characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that are collecting photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio), and to correlate observed changes with potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real-time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher fidelity I&W. This paper will discuss the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, k-Nearest Neighbors, etc., and unsupervised algorithms, for example k-means, Principal Component Analysis, Hierarchical Clustering, etc., and the implementation of these algorithms, is explored. Results of applying these algorithms to EspOC data both in an off-line "pattern of life" analysis as well as using the algorithms on-line in real-time, meaning as data is collected, will be presented. Finally, future work in applying ML for SSA will be discussed.
Verifiable Secret Redistribution for Threshold Sharing Schemes
2002-02-01
complete verification in our protocol, old shareholders broadcast a commitment to the secret to the new shareholders. We prove that the new... of an m − 1 degree polynomial from m of n points yields a constant term in the polynomial that corresponds to the secret. In Blakley's scheme [Bla79]... the intersection of m of n vector spaces yields a one-dimensional vector that corresponds to the secret. Desmedt surveys other sharing schemes
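The polynomial-based reconstruction referred to above (Shamir-style sharing) can be sketched as follows; the prime modulus and the example polynomial are illustrative assumptions, not values from the report.

```python
# The secret is the constant term of a degree m-1 polynomial, recovered from m of n
# points by Lagrange interpolation at x = 0 over a prime field.
P = 2**31 - 1  # a prime modulus (assumption for the sketch)

def reconstruct(points, p=P):
    """points: list of (x, y) pairs on the polynomial; returns the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = (num * (-xj)) % p          # factor (0 - xj) of the Lagrange basis
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# Example: secret 1234 hidden in f(x) = 1234 + 77x + 5x^2 (degree m-1 = 2); any 3 shares suffice.
f = lambda x: (1234 + 77 * x + 5 * x * x) % P
shares = [(x, f(x)) for x in (1, 2, 5)]
print(reconstruct(shares))  # -> 1234
```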
Aspects of mutually unbiased bases in odd-prime-power dimensions
NASA Astrophysics Data System (ADS)
Chaturvedi, S.
2002-04-01
We rephrase the Wootters-Fields construction [W. K. Wootters and B. C. Fields, Ann. Phys. 191, 363 (1989)] of a full set of mutually unbiased bases in a complex vector space of dimension N = p^r, where p is an odd prime, in terms of the character vectors of the cyclic group G of order p. This form may be useful in explicitly writing down mutually unbiased bases for N = p^r.
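A minimal numerical check of the prime (r = 1) case of this construction, assuming the standard form in which the k-th component of the vector labelled (a, b) is omega^(a*k^2 + b*k)/sqrt(p):

```python
import numpy as np

# For an odd prime p, the standard basis together with the p quadratic-phase bases below
# form p + 1 mutually unbiased bases.
p = 7
omega = np.exp(2j * np.pi / p)
k = np.arange(p)

def basis(a):
    # columns are the basis vectors labelled by b = 0..p-1
    return np.array([omega ** ((a * k * k + b * k) % p) for b in range(p)]).T / np.sqrt(p)

bases = [np.eye(p)] + [basis(a) for a in range(p)]

# All cross-basis overlaps should have modulus 1/sqrt(p).
for i in range(len(bases)):
    for j in range(i + 1, len(bases)):
        overlaps = np.abs(bases[i].conj().T @ bases[j])
        assert np.allclose(overlaps, 1 / np.sqrt(p)), (i, j)
print("all", len(bases), "bases are pairwise mutually unbiased for p =", p)
```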
On vector-valued Poincaré series of weight 2
NASA Astrophysics Data System (ADS)
Meneses, Claudio
2017-10-01
Given a pair (Γ , ρ) of a Fuchsian group of the first kind, and a unitary representation ρ of Γ of arbitrary rank, the problem of construction of vector-valued Poincaré series of weight 2 is considered. Implications in the theory of parabolic bundles are discussed. When the genus of the group is zero, it is shown how an explicit basis for the space of these functions can be constructed.
Neurophysiological Study of Vector Responses to Repellents.
1980-08-01
vector organisms and of the relationship between the physiologic condition of the organisms and the generation and transmission of sensory... hemolymph space at the tip of the antenna and connected to ground. A similar recording electrode was inserted through the cuticle at the base of a... pieces of filter paper of uniform size and placed them in glass tubes, through which the airstream was passed, rather than in the gas bubbler flask
Chebabhi, Ali; Fellah, Mohammed Karim; Kessal, Abdelhalim; Benkhoris, Mohamed F
2016-07-01
This paper proposes a new balancing three-level three-dimensional space vector modulation (B3L-3DSVM) strategy that uses redundant voltage vectors to achieve precise, high-performance control of a three-phase, three-level, four-leg neutral point clamped (NPC) inverter based shunt active power filter (SAPF). The SAPF eliminates source current harmonics, reduces the magnitude of the neutral wire current (eliminating the zero-sequence current produced by single-phase nonlinear loads), and compensates reactive power in three-phase four-wire electrical networks. The strategy handles switching pulse generation, balances the dc bus capacitor voltages (keeping the voltages of the two dc bus capacitors equal), and at the same time reduces and fixes the switching frequency of the inverter switches. Nonlinear backstepping controllers (NBSC) regulate the dc bus capacitor voltages and the SAPF injected currents, stabilizing the system, improving the response, and eliminating the overshoot and undershoot of a traditional proportional-integral (PI) controller. Conventional three-level three-dimensional space vector modulation (C3L-3DSVM) and B3L-3DSVM are evaluated and compared in terms of the error between the two dc bus capacitor voltages, the SAPF output voltages, the THDv and THDi of the source currents, the magnitude of the source neutral wire current, and the reactive power compensation under unbalanced single-phase nonlinear loads. The success, robustness, and effectiveness of the proposed control strategies are demonstrated through simulation using SimPowerSystems and S-Functions in MATLAB/Simulink. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Kristian Birkeland - The man and the scientist
NASA Technical Reports Server (NTRS)
Egeland, A.
1984-01-01
A review is presented of Birkeland's outstanding contributions to auroral theory and, in particular, to the foundation of modern magnetospheric physics. Birkeland's first years in research, after studying mathematics and theoretical physics at the university, were concerned with Maxwell's theory, the investigation of electromagnetic waves in conductors, wave propagation in space, energy transfer by means of electromagnetic waves, and a general expression for the Poynting vector. Experiments with cathode rays near a magnet in 1895 led Birkeland to the development of an auroral theory, which represented the first detailed, realistic explanation of the creation of an aurora. Attention is given to experiments conducted to verify the theory, the discovery of the polar elementary storm, and the deduction of auroral electric currents. Birkeland's background and education are also considered, along with his personality.
NASA Astrophysics Data System (ADS)
Yepez-Martinez, Tochtli; Civitarese, Osvaldo; Hess, Peter O.
The SO(4) symmetry of a sector of the quantum chromodynamics (QCD) Hamiltonian was analyzed in a previous work. The numerical calculations were then restricted to a particle-hole (ph) space, and the comparison with experimental data was reasonable in spite of the complexity of the QCD spectrum at low energy. Here we continue along this line of research and present new results for the treatment of the QCD Hamiltonian in the SO(4) representation, including ground-state correlations by means of the Random Phase Approximation (RPA). Within this model we are able to identify states which may be associated with physical pseudoscalar and vector mesons, like η, η′, K, ρ, ω, ϕ, as well as the pion (π).
Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue
NASA Astrophysics Data System (ADS)
Kree, P.; Soize, C.
The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, are analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.
Betatron motion with coupling of horizontal and vertical degrees of freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. A. Bogacz; V. A. Lebedev
2002-11-21
The Courant-Snyder parameterization of one-dimensional linear betatron motion is generalized to two-dimensional coupled linear motion. To represent the 4 x 4 symplectic transfer matrix the following ten parameters were chosen: four beta-functions, four alpha-functions and two betatron phase advances, which have a meaning similar to those of the Courant-Snyder parameterization. Such a parameterization works equally well for weak and strong coupling and can be useful for analysis of coupled betatron motion in circular accelerators as well as in transfer lines. Similarly, the transfer matrix, the bilinear form describing the phase space ellipsoid, and the second-order moments are related to the eigenvectors. The corresponding equations can be useful in interpreting tracking results and experimental data.
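As a simple illustration of the symplectic constraint such a parameterization must respect (not the authors' coupled parameterization), the following sketch builds an uncoupled 4 x 4 transfer matrix from two one-dimensional Courant-Snyder blocks and checks that M^T J M = J:

```python
import numpy as np

# One-dimensional Courant-Snyder one-turn matrix:
# M_u = [[cos(mu) + alpha*sin(mu),            beta*sin(mu)],
#        [-(1 + alpha**2)/beta * sin(mu), cos(mu) - alpha*sin(mu)]]
def cs_block(beta, alpha, mu):
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c + alpha * s, beta * s],
                     [-(1 + alpha ** 2) / beta * s, c - alpha * s]])

Mx = cs_block(beta=10.0, alpha=1.5, mu=0.7)   # illustrative optics values
My = cs_block(beta=4.0, alpha=-0.3, mu=1.9)
M = np.block([[Mx, np.zeros((2, 2))], [np.zeros((2, 2)), My]])

J2 = np.array([[0, 1], [-1, 0]])
J = np.block([[J2, np.zeros((2, 2))], [np.zeros((2, 2)), J2]])
print(np.allclose(M.T @ J @ M, J))  # True: the transfer matrix is symplectic
```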
Handy elementary algebraic properties of the geometry of entanglement
NASA Astrophysics Data System (ADS)
Blair, Howard A.; Alsing, Paul M.
2013-05-01
The space of separable states of a quantum system is a hyperbolic surface in a high dimensional linear space, which we call the separation surface, within the exponentially high dimensional linear space containing the quantum states of an n component multipartite quantum system. A vector in the linear space is representable as an n-dimensional hypermatrix with respect to bases of the component linear spaces. A vector will be on the separation surface iff every determinant of every 2-dimensional, 2-by-2 submatrix of the hypermatrix vanishes. This highly rigid constraint can be tested merely in time asymptotically proportional to d, where d is the dimension of the state space of the system due to the extreme interdependence of the 2-by-2 submatrices. The constraint on 2-by-2 determinants entails an elementary closed-form formula for a parametric characterization of the entire separation surface with d-1 parameters in the characterization. The state of a factor of a partially separable state can be calculated in time asymptotically proportional to the dimension of the state space of the component. If all components of the system have approximately the same dimension, the time complexity of calculating a component state as a function of the parameters is asymptotically proportional to the time required to sort the basis. Metric-based entanglement measures of pure states are characterized in terms of the separation hypersurface.
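A minimal sketch of the 2-by-2 determinant test for the simplest, bipartite case, where the hypermatrix is an ordinary d1 x d2 coefficient matrix; the function name and the tolerance are illustrative choices, not the authors':

```python
import numpy as np

# A bipartite pure state lies on the separation surface iff every 2x2 minor of its
# d1 x d2 coefficient matrix vanishes (i.e. the matrix has rank 1).
def on_separation_surface(psi, d1, d2, tol=1e-12):
    C = psi.reshape(d1, d2)
    for i in range(d1):
        for k in range(i + 1, d1):
            for j in range(d2):
                for l in range(j + 1, d2):
                    if abs(C[i, j] * C[k, l] - C[i, l] * C[k, j]) > tol:
                        return False
    return True

# A product state passes; the Bell state (|00> + |11>)/sqrt(2) fails.
a = np.array([1.0, 2.0]); b = np.array([0.5, -1.0, 3.0])
product = np.kron(a, b) / np.linalg.norm(np.kron(a, b))
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(on_separation_surface(product, 2, 3))  # True
print(on_separation_surface(bell, 2, 2))     # False
```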
Remote sensing of earth terrain
NASA Technical Reports Server (NTRS)
Kong, J. A.
1988-01-01
Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean N-bar. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite N-bar or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.
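The compound construction described above can be illustrated with a small simulation, a stand-in with toy parameter values rather than the authors' code, in which each return is the sum of N correlated complex Gaussian vectors and N is negative binomial with shape alpha and mean N-bar:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, N_bar, n_samples = 2.0, 50.0, 10000
C = np.array([[1.0, 0.3], [0.3, 0.5]])   # toy 2-polarization covariance matrix
L = np.linalg.cholesky(C)

# negative binomial with mean N_bar and shape alpha: success probability p = alpha/(alpha + N_bar)
N = rng.negative_binomial(alpha, alpha / (alpha + N_bar), size=n_samples)

def complex_gaussian(size):
    # zero-mean complex Gaussian with unit covariance (identity) per component
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

# each sample is the coherent sum of N correlated complex Gaussian 2-vectors
samples = np.array([(L @ complex_gaussian((2, k))).sum(axis=1) for k in N])

# empirical covariance of the summed returns is approximately N_bar * C
print(np.round(np.cov(samples.T, bias=True).real, 2))
```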
NASA Technical Reports Server (NTRS)
Edwards, C. L. W.; Meissner, F. T.; Hall, J. B.
1979-01-01
Color computer graphics techniques were investigated as a means of rapidly scanning and interpreting large sets of transient heating data. The data presented were generated to support the conceptual design of a heat-sink thermal protection system (TPS) for a hypersonic research airplane. Color-coded vector and raster displays of the numerical geometry used in the heating calculations were employed to analyze skin thicknesses and surface temperatures of the heat-sink TPS under a variety of trajectory flight profiles. Both vector and raster displays proved to be effective means for rapidly identifying heat-sink mass concentrations, regions of high heating, and potentially adverse thermal gradients. The color-coded (raster) surface displays are a very efficient means for displaying surface-temperature and heating histories, and thereby the more stringent design requirements can quickly be identified. The related hardware and software developments required to implement both the vector and the raster displays for this application are also discussed.
NASA Astrophysics Data System (ADS)
Dougherty, Andrew W.
Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid state devices. The high temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long-term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long-term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors, to decide whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler than, and to perform better than, an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor responses in the time, gas and temperature domains, and the dual representation of the support vector regression solution is shown to provide insight into the sensor's sensitivity and potential orthogonality. Finally, the dual weights of the support vector regression solution to the sensor's response are suggested as a fitness function for a genetic algorithm, or some other method for efficiently searching large parameter spaces.
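A minimal least-squares support vector regression sketch in the spirit of this description; the kernel k(x, x') = 1/(x + x' + c) is used as an illustrative reciprocal-type, positive-definite stand-in (the thesis' exact reciprocal kernel and its free parameters may differ), and the toy response curve is an assumption:

```python
import numpy as np

def recip_kernel(X, Z, c=1.0):
    # illustrative reciprocal-type kernel on positive inputs
    return 1.0 / (X[:, None] + Z[None, :] + c)

def lssvr_fit(x, y, gamma=100.0, c=1.0):
    # standard LS-SVR dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(x)
    K = recip_kernel(x, x, c)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b and dual weights alpha

def lssvr_predict(x_new, x_train, b, alpha, c=1.0):
    return recip_kernel(x_new, x_train, c) @ alpha + b

# sparse training data from a smooth toy "sensor response" curve
x_train = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
y_train = 1.0 / (1.0 + x_train)
b, alpha = lssvr_fit(x_train, y_train)
x_test = np.linspace(0.1, 4.0, 5)
print(np.round(lssvr_predict(x_test, x_train, b, alpha), 3))
print(np.round(1.0 / (1.0 + x_test), 3))
```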
Instantaneous brain dynamics mapped to a continuous state space.
Billings, Jacob C W; Medda, Alessio; Shakil, Sadia; Shen, Xiaohong; Kashyap, Amrit; Chen, Shiyang; Abbas, Anzar; Zhang, Xiaodi; Nezafati, Maysam; Pan, Wen-Ju; Berman, Gordon J; Keilholz, Shella D
2017-11-15
Measures of whole-brain activity, from techniques such as functional Magnetic Resonance Imaging, provide a means to observe the brain's dynamical operations. However, interpretation of whole-brain dynamics has been stymied by the inherently high-dimensional structure of brain activity. The present research addresses this challenge through a series of scale transformations in the spectral, spatial, and relational domains. Instantaneous multispectral dynamics are first developed from input data via a wavelet filter bank. Voxel-level signals are then projected onto a representative set of spatially independent components. The correlation distance over the instantaneous wavelet-ICA state vectors is a graph that may be embedded onto a lower-dimensional space to assist the interpretation of state-space dynamics. Applying this procedure to a large sample of resting-state and task-active data (acquired through the Human Connectome Project), we segment the empirical state space into a continuum of stimulus-dependent brain states. Upon observing the local neighborhood of brain-states adopted subsequent to each stimulus, we may conclude that resting brain activity includes brain states that are, at times, similar to those adopted during tasks, but that are at other times distinct from task-active brain states. As task-active brain states often populate a local neighborhood, back-projection of segments of the dynamical state space onto the brain's surface reveals the patterns of brain activity that support many experimentally-defined states. Copyright © 2017 Elsevier Inc. All rights reserved.
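The last two steps, a correlation-distance matrix over state vectors followed by a low-dimensional embedding, can be sketched as follows; random data stand in for the wavelet-ICA state vectors, and classical multidimensional scaling stands in for the embedding used by the authors:

```python
import numpy as np

rng = np.random.default_rng(1)
states = rng.standard_normal((200, 30))            # stand-in for wavelet-ICA state vectors

# correlation distance d_ij = 1 - corr(state_i, state_j)
D = 1.0 - np.corrcoef(states)

# classical MDS: double-center the squared distances, then take the top eigenvectors
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
embedding = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))   # n x 2 coordinates
print(embedding.shape)
```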
Analysis of the DORIS, GNSS, SLR, VLBI and gravimetric time series at the GGOS core sites
NASA Astrophysics Data System (ADS)
Moreaux, G.; Lemoine, F. G.; Luceri, V.; Pavlis, E. C.; MacMillan, D. S.; Bonvalot, S.; Saunier, J.
2017-12-01
Since June 2016 and the installation of a new DORIS station in Wettzell (Germany), four geodetic sites (Badary, Greenbelt, Wettzell and Yarragadee) are equipped with the four space geodetic techniques (DORIS, GNSS, SLR and VLBI). In line with the GGOS (Global Geodetic Observing System) objective of achieving a terrestrial reference frame at the millimetric level of accuracy, the combination centers of the four space techniques initiated a joint study to assess the level of agreement among these space geodetic techniques. In addition to the four sites, we will consider all the GGOS core sites including the seven sites with at least two space geodetic techniques in addition to DORIS. Starting from the coordinate time series, we will estimate and compare the mean positions and velocities of the co-located instruments. The temporal evolution of the coordinate differences will also be evaluated with respect to the local tie vectors and discrepancies will be investigated. Then, the analysis of the signal content of the time series will be carried out. Amplitudes and phases of the common signals among the techniques, and eventually from gravity data, will be compared. The first objective of this talk is to describe our joint study: the sites, the data, and the objectives. The second purpose is to present the first results obtained from the GGAO (Goddard Geophysical and Astronomic Observatory) site of Greenbelt.
Aslan, Hamide; Dey, Ranadhir; Meneses, Claudio; Castrovinci, Philip; Jeronimo, Selma Maria Bezerra; Oliva, Gaetano; Fischer, Laurent; Duncan, Robert C.; Nakhasi, Hira L.; Valenzuela, Jesus G.; Kamhawi, Shaden
2013-01-01
Background. Visceral leishmaniasis (VL) is transmitted by sand flies. Protection of needle-challenged vaccinated mice was abrogated in vector-initiated cutaneous leishmaniasis, highlighting the importance of developing natural transmission models for VL. Methods. We used Lutzomyia longipalpis to transmit Leishmania infantum or Leishmania donovani to hamsters. Vector-initiated infections were monitored and compared with intracardiac infections. Body weights were recorded weekly. Organ parasite loads and parasite pick-up by flies were assessed in sick hamsters. Results. Vector-transmitted L. infantum and L. donovani caused ≥5-fold increase in spleen weight compared with uninfected organs and had geometric mean parasite loads (GMPL) comparable to intracardiac inoculation of 10^7–10^8 parasites, although vector-initiated disease progression was slower and weight loss was greater. Only vector-initiated L. infantum infections caused cutaneous lesions at transmission and distal sites. Importantly, 45.6%, 50.0%, and 33.3% of sand flies feeding on ear, mouth, and testicular lesions, respectively, were parasite-positive. Successful transmission was associated with a high mean percent of metacyclics (66%–82%) rather than total GMPL (2.0 × 10^4–8.0 × 10^4) per midgut. Conclusions. This model provides an improved platform to study initial immune events at the bite site, parasite tropism, and pathogenesis and to test drugs and vaccines against naturally acquired VL. PMID:23288926
Family leader empowerment program using participatory learning process for dengue vector control.
Pengvanich, Veerapong
2011-02-01
Assess the performance of an empowerment program using a participatory learning process for the control of the Dengue vector. The program focuses on using the leaders of families as the main executors of the vector control protocol. This quasi-experimental research utilized a two-group pretest-posttest design. The sample group consisted of 120 family leaders from two communities in Mueang Municipality, Chachoengsao Province. The research was conducted during an 8-week period between April and June 2010. The data were collected and analyzed based on frequency, percentage, mean, paired t-test, and independent t-test. The result was evaluated by comparing the difference between the mean prevalence index of mosquito larvae before and after the process implementation, in terms of the container index (CI) and the house index (HI). After spending eight weeks in the empowerment program, the family leaders' behavior with respect to Dengue vector control had improved. The Container Index and the House Index were found to decrease with statistical significance at the 0.05 level. The reduction of CI and HI suggested that the program worked well in the selected communities. The success of the Dengue vector control program depends on the cooperation and participation of many groups, especially the families in the community. When the family leaders have a good attitude and are capable of carrying out the vector control protocol, the risk factors leading to the incidence of Dengue virus infection can be reduced.
Xia, Wenjun; Mita, Yoshio; Shibata, Tadashi
2016-05-01
Aiming at efficient data condensation and improved accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-means centers are used as substitutes for the entire template set. Then, to enhance the classification performance, critical boundary vectors are selected by a novel learning algorithm, which is completed within a single iteration. Moreover, to remove noisy boundary vectors that can mislead the classification in a generalized manner, a global categorization scheme has been explored and applied to the algorithm. The global categorization automatically categorizes each classification problem and rapidly selects the boundary vectors according to the nature of the problem. Finally, only critical boundary vectors and k-means centers are used as the new template set for classification. Experimental results for 24 data sets show that the proposed algorithm can effectively reduce the number of template vectors for classification with a high learning speed. At the same time, it improves the accuracy by an average of 2.17% compared with traditional NN classifiers and also shows greater accuracy than seven other TR methods. We have shown the feasibility of using a proof-of-concept FPGA system of 256 64-D vectors to accelerate the proposed method in hardware. At a 50-MHz clock frequency, the proposed system achieves a 3.86 times higher learning speed than a 3.4-GHz PC, while consuming only 1% of the power used by the PC.
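A simplified sketch of the template-reduction idea (per-class k-means centers plus retained boundary samples); the selection rule below is a plain illustration, not the paper's single-iteration algorithm or its global categorization step:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # plain Lloyd iterations, enough for this toy illustration
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers

def nn_predict(templates, template_labels, X):
    d = ((X[:, None, :] - templates[None]) ** 2).sum(-1)
    return template_labels[np.argmin(d, axis=1)]

# toy two-class data
X0 = rng.normal([0, 0], 1.0, (200, 2)); X1 = rng.normal([2.5, 2.5], 1.0, (200, 2))
X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)

centers = np.vstack([kmeans(X[y == c], 3) for c in (0, 1)])
center_labels = np.repeat([0, 1], 3)

# keep as extra "boundary" templates the training samples the center-only 1-NN gets wrong
pred = nn_predict(centers, center_labels, X)
templates = np.vstack([centers, X[pred != y]])
template_labels = np.concatenate([center_labels, y[pred != y]])
print(len(templates), "templates instead of", len(X))
print("training accuracy:", (nn_predict(templates, template_labels, X) == y).mean())
```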
NASA Astrophysics Data System (ADS)
Bechstein, S.; Petsche, F.; Scheiner, M.; Drung, D.; Thiel, F.; Schnabel, A.; Schurig, Th
2006-06-01
Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.
Wang, Hui; Qin, Feng; Ruan, Liu; Wang, Rui; Liu, Qi; Ma, Zhanhong; Li, Xiaolong; Cheng, Pei; Wang, Haiguang
2016-01-01
It is important to implement detection and assessment of plant diseases based on remotely sensed data for disease monitoring and control. Hyperspectral data of healthy leaves, leaves in incubation period and leaves in diseased period of wheat stripe rust and wheat leaf rust were collected under in-field conditions using a black-paper-based measuring method developed in this study. After data preprocessing, the models to identify the diseases were built using distinguished partial least squares (DPLS) and support vector machine (SVM), and the disease severity inversion models of stripe rust and the disease severity inversion models of leaf rust were built using quantitative partial least squares (QPLS) and support vector regression (SVR). All the models were validated by using leave-one-out cross validation and external validation. The diseases could be discriminated using both distinguished partial least squares and support vector machine with the accuracies of more than 99%. For each wheat rust, disease severity levels were accurately retrieved using both the optimal QPLS models and the optimal SVR models with the coefficients of determination (R2) of more than 0.90 and the root mean square errors (RMSE) of less than 0.15. The results demonstrated that identification and severity evaluation of stripe rust and leaf rust at the leaf level could be implemented based on the hyperspectral data acquired using the developed method. A scientific basis was provided for implementing disease monitoring by using aerial and space remote sensing technologies.
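The two modelling steps described above can be sketched with synthetic spectra standing in for the measured leaf reflectance; this assumes scikit-learn, approximates DPLS with a support vector classifier for the discrimination step, and uses PLS regression for the severity inversion:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n, bands = 60, 150
severity = rng.uniform(0, 1, n)                       # toy severity levels
disease = (rng.uniform(size=n) > 0.5).astype(int)     # 0 = stripe rust, 1 = leaf rust
spectra = rng.standard_normal((n, bands)) * 0.05
spectra += np.outer(severity, np.linspace(0, 1, bands))          # severity signal
spectra[disease == 1] += np.sin(np.linspace(0, 6, bands)) * 0.3  # disease-type signal

# disease discrimination, leave-one-out cross-validated
clf = SVC(kernel="rbf", C=10.0)
pred_class = cross_val_predict(clf, spectra, disease, cv=LeaveOneOut())
print("classification accuracy:", (pred_class == disease).mean())

# severity inversion with PLS regression, leave-one-out cross-validated
pls = PLSRegression(n_components=5)
pred_sev = cross_val_predict(pls, spectra, severity, cv=LeaveOneOut()).ravel()
print("severity RMSE:", round(float(np.sqrt(np.mean((pred_sev - severity) ** 2))), 3))
```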
Betti numbers of graded modules and cohomology of vector bundles
NASA Astrophysics Data System (ADS)
Eisenbud, David; Schreyer, Frank-Olaf
2009-07-01
In the remarkable paper Graded Betti numbers of Cohen-Macaulay modules and the multiplicity conjecture, Mats Boij and Jonas Soederberg conjectured that the Betti table of a Cohen-Macaulay module over a polynomial ring is a positive linear combination of Betti tables of modules with pure resolutions. We prove a strengthened form of their conjectures. Applications include a proof of the Multiplicity Conjecture of Huneke and Srinivasan and a proof of the convexity of a fan naturally associated to the Young lattice. With the same tools we show that the cohomology table of any vector bundle on projective space is a positive rational linear combination of the cohomology tables of what we call supernatural vector bundles. Using this result we give new bounds on the slope of a vector bundle in terms of its cohomology.