Sample records for space vector approach

  1. A vector space model approach to identify genetically related diseases.

    PubMed

    Sarkar, Indra Neil

    2012-01-01

    The relationship between diseases and their causative genes can be complex, especially in the case of polygenic diseases. Further exacerbating the challenges in their study is that many genes may be causally related to multiple diseases. This study explored the relationship between diseases through the adaptation of an approach pioneered in the context of information retrieval: vector space models. A vector space model approach was developed that bridges gene-disease knowledge inferred across three knowledge bases: Online Mendelian Inheritance in Man, GenBank, and Medline. The approach was then used to identify potentially related diseases for two target diseases: Alzheimer disease and Prader-Willi syndrome. In the case of both Alzheimer disease and Prader-Willi syndrome, a set of plausible diseases was identified that may warrant further exploration. This study furthers seminal work by Swanson et al. that demonstrated the potential for mining literature for putative correlations. Using a vector space modeling approach, information from both biomedical literature and genomic resources (like GenBank) can be combined toward the identification of putative correlations of interest. To this end, the relevance of the diseases predicted by the vector space modeling approach was validated against supporting literature. The results of this study suggest that a vector space model approach may be a useful means to identify potential relationships between complex diseases, and thereby enable the coordination of gene-based findings across multiple complex diseases.
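    The core operation in a vector space model of this kind is ranking candidates by the similarity of their association vectors. A minimal sketch, assuming diseases are represented as weight vectors over a shared set of gene axes (the disease names and weights below are invented for illustration):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical gene-weight vectors over the same gene axes.
target    = [1.0, 0.8, 0.0, 0.3]
disease_x = [0.9, 0.7, 0.1, 0.2]
disease_y = [0.0, 0.0, 1.0, 0.9]

# Rank candidate diseases by similarity to the target.
scores = {"disease_x": cosine_similarity(target, disease_x),
          "disease_y": cosine_similarity(target, disease_y)}
best = max(scores, key=scores.get)
```

Candidates whose scores exceed some threshold would then be the "plausible diseases" handed off for literature validation.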

  2. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  3. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  4. New Term Weighting Formulas for the Vector Space Method in Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisholm, E.; Kolda, T.G.

    The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
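    The term weighting idea can be sketched with the classic tf-idf scheme, w(t, d) = tf(t, d) * log(N / df(t)), which is the usual baseline the new schemes in such work are measured against (the toy documents below are invented for illustration):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build tf-idf vectors: w(t, d) = tf(t, d) * log(N / df(t))."""
    N = len(docs)
    df = Counter()                 # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vocab = sorted(df)
    vectors = []
    for doc in docs:
        tf = Counter(doc)          # term frequency within this document
        vectors.append([tf[t] * math.log(N / df[t]) for t in vocab])
    return vocab, vectors

docs = [["space", "vector", "model"],
        ["vector", "retrieval"],
        ["space", "probe"]]
vocab, vecs = tf_idf_vectors(docs)
```

Terms appearing in every document get weight zero, while rare terms are boosted, which is exactly the collection-wide component the abstract refers to.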

  5. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  6. Effective Numerical Methods for Solving Elliptical Problems in Strengthened Sobolev Spaces

    NASA Technical Reports Server (NTRS)

    D'yakonov, Eugene G.

    1996-01-01

    Fourth-order elliptic boundary value problems in the plane can be reduced to operator equations in Hilbert spaces G that are certain subspaces of the Sobolev space W_2^2(Omega) ≡ G^(2). The appearance of asymptotically optimal algorithms for Stokes-type problems made it natural to focus on an approach that considers rot w ≡ (D_2 w, -D_1 w) ≡ u as a new unknown vector function, which automatically satisfies the condition div u = 0. In this work, we show that this approach can also be developed for an important class of problems from the theory of plates and shells with stiffeners. The main mathematical problem was to show that the well-known inf-sup condition (normal solvability of the divergence operator) holds for special Hilbert spaces. This result is also essential for certain hydrodynamics problems.

  7. Time-based self-spacing techniques using cockpit display of traffic information during approach to landing in a terminal area vectoring environment

    NASA Technical Reports Server (NTRS)

    Williams, D. H.

    1983-01-01

    A simulation study was undertaken to evaluate two time-based self-spacing techniques for in-trail following during terminal area approach. An electronic traffic display was provided in the weather radarscope location. The displayed self-spacing cues allowed the simulated aircraft to follow and to maintain spacing on another aircraft which was being vectored by air traffic control (ATC) for landing in a high-density terminal area. Separation performance data indicate that the information provided on the traffic display was adequate for the test subjects to accurately follow the approach path of another aircraft without the assistance of ATC. The time-based technique with a constant-delay spacing criterion produced the most satisfactory spacing performance. Pilot comments indicate that the workload associated with the self-separation task was very high and that additional spacing command information and/or aircraft autopilot functions would be desirable for operational implementation of the self-spacing task.
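    A constant-delay spacing criterion commands ownship toward the point on the lead aircraft's track occupied tau seconds earlier. A minimal sketch under that assumption (the track data and function names below are hypothetical, not from the study):

```python
def delayed_lead_position(lead_track, t, tau):
    """lead_track maps time (s) to along-path distance (m).
    Returns the lead's position tau seconds before time t,
    linearly interpolating between recorded samples."""
    target_t = t - tau
    times = sorted(lead_track)
    if target_t <= times[0]:          # clamp to the recorded track
        return lead_track[times[0]]
    if target_t >= times[-1]:
        return lead_track[times[-1]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= target_t <= t1:
            frac = (target_t - t0) / (t1 - t0)
            return lead_track[t0] + frac * (lead_track[t1] - lead_track[t0])

def spacing_error(lead_track, own_distance, t, tau):
    """Positive error: ownship is behind the commanded point."""
    return delayed_lead_position(lead_track, t, tau) - own_distance

# Lead aircraft at a steady 100 m/s; 60 s constant-delay criterion.
track = {0: 0.0, 30: 3000.0, 60: 6000.0, 90: 9000.0}
error = spacing_error(track, 2900.0, 90, 60)
```

Driving this error toward zero keeps ownship a fixed time interval, rather than a fixed distance, behind the lead aircraft.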

  8. A Vector Approach to Euclidean Geometry: Inner Product Spaces, Euclidean Geometry and Trigonometry, Volume 2. Teacher's Edition.

    ERIC Educational Resources Information Center

    Vaughan, Herbert E.; Szabo, Steven

    This is the teacher's edition of a text for the second year of a two-year high school geometry course. The course bases plane and solid geometry and trigonometry on the fact that the translations of a Euclidean space constitute a vector space which has an inner product. Congruence is a geometric topic reserved for Volume 2. Volume 2 opens with an…

  9. The next 25 years: Industrialization of space - Rationale for planning

    NASA Technical Reports Server (NTRS)

    Von Puttkamer, J.

    1977-01-01

    A methodology for planning the industrialization of space is discussed. The suggested approach combines the extrapolative ('push') approach, in which alternative futures are projected on the basis of past and current trends and tendencies, with the normative ('pull') view, in which an ideal state in the far future is postulated and policies and decisions are directed toward its attainment. Time-reversed vectors of the future are tied to extrapolated, trend-oriented vectors of the quasi-present to identify common plateaus or stepping stones in technological development. Important steps in the industrialization of space to attain the short-range goals of production of space-derived energy, goods, and services and the long-range goal of space colonization are discussed.

  10. Fast metabolite identification with Input Output Kernel Regression.

    PubMed

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-06-15

    An important problem in metabolomics is identifying metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Contact: celine.brouard@aalto.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  11. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is identifying metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628

  12. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method makes it possible to simultaneously project the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  13. Use of Mapping and Spatial and Space-Time Modeling Approaches in Operational Control of Aedes aegypti and Dengue

    PubMed Central

    Eisen, Lars; Lozano-Fuentes, Saul

    2009-01-01

    The aims of this review paper are to 1) provide an overview of how mapping and spatial and space-time modeling approaches have been used to date to visualize and analyze mosquito vector and epidemiologic data for dengue; and 2) discuss the potential for these approaches to be included as routine activities in operational vector and dengue control programs. Geographical information system (GIS) software is becoming more user-friendly and is now complemented by free mapping software that provides access to satellite imagery and basic feature-making tools and has the capacity to generate static maps as well as dynamic time-series maps. Our challenge is now to move beyond the research arena by transferring mapping and GIS technologies and spatial statistical analysis techniques in user-friendly packages to operational vector and dengue control programs. This will enable control programs to, for example, generate risk maps for exposure to dengue virus, develop Priority Area Classifications for vector control, and explore socioeconomic associations with dengue risk. PMID:19399163

  14. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.

  15. Exclusive vector meson production with leading neutrons in a saturation model for the dipole amplitude in mixed space

    NASA Astrophysics Data System (ADS)

    Amaral, J. T.; Becker, V. M.

    2018-05-01

    We investigate ρ vector meson production in e p collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.

  16. Interacting vector fields in relativity without relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Edward; Barbour, Julian

    2002-06-01

    Barbour, Foster and Ó Murchadha have recently developed a new framework, called here the 3-space approach, for the formulation of classical bosonic dynamics. Neither time nor a locally Minkowskian structure of spacetime are presupposed. Both arise as emergent features of the world from geodesic-type dynamics on a space of three-dimensional metric-matter configurations. In fact gravity, the universal light-cone and Abelian gauge theory minimally coupled to gravity all arise naturally through a single common mechanism. It yields relativity - and more - without presupposing relativity. This paper completes the recovery of the presently known bosonic sector within the 3-space approach. We show, for a rather general ansatz, that 3-vector fields can interact among themselves only as Yang-Mills fields minimally coupled to gravity.

  17. Anisotropic fractal media by vector calculus in non-integer dimensional space

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describing anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper, differential operators of the first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  18. Discrete Fourier Transform Analysis in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2009-01-01

    Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus N log(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.
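    The transformation being accelerated here is the standard DFT, X[k] = sum_n x[n] exp(-2*pi*i*k*n/N). A naive O(N^2) sketch of that definition (not the paper's optimized algorithm) makes the complex vector space view concrete:

```python
import cmath

def dft(x):
    """Naive DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A pure complex tone at bin 1 concentrates all energy in X[1].
N = 8
signal = [cmath.exp(2j * cmath.pi * n / N) for n in range(N)]
spectrum = dft(signal)
```

Each output X[k] is an inner product of the signal with one basis vector of C^N, which is the "transformation in a complex vector space" the record refers to.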

  19. A simple map-based localization strategy using range measurements

    NASA Astrophysics Data System (ADS)

    Moore, Kevin L.; Kutiyanawala, Aliasgar; Chandrasekharan, Madhumita

    2005-05-01

    In this paper we present a map-based approach to localization. We consider indoor navigation in known environments based on the idea of a "vector cloud" by observing that any point in a building has an associated vector defining its distance to the key structural components (e.g., walls, ceilings, etc.) of the building in any direction. Given a building blueprint we can derive the "ideal" vector cloud at any point in space. Then, given measurements from sensors on the robot we can compare the measured vector cloud to the possible vector clouds cataloged from the blueprint, thus determining location. We present algorithms for implementing this approach to localization, using the Hamming norm, the 1-norm, and the 2-norm. The effectiveness of the approach is verified by experiments on a 2-D testbed using a mobile robot with a 360° laser range-finder and through simulation analysis of robustness.
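    The "vector cloud" matching described above reduces to a nearest-catalog-entry search under a chosen norm. A minimal sketch, assuming range vectors have been precomputed from the blueprint at candidate grid points (the catalog and measurements below are invented for illustration):

```python
def norm_distance(measured, candidate, p):
    """Distance between a measured range vector and a cataloged one.
    p=0: Hamming-style count of entries differing by more than 0.1 m;
    p=1: 1-norm; p=2: 2-norm."""
    if p == 0:
        return sum(1 for m, c in zip(measured, candidate) if abs(m - c) > 0.1)
    return sum(abs(m - c) ** p for m, c in zip(measured, candidate)) ** (1.0 / p)

# Hypothetical blueprint catalog: grid point -> ranges to walls in 4 directions.
catalog = {(0, 0): [5.0, 3.0, 5.0, 3.0],
           (4, 0): [1.0, 3.0, 9.0, 3.0],
           (0, 6): [5.0, 9.0, 5.0, 1.0]}
measured = [1.2, 3.1, 8.8, 2.9]   # noisy laser ranges at the unknown location

location = min(catalog, key=lambda pt: norm_distance(measured, catalog[pt], 2))
```

The estimated location is simply the catalog point whose ideal vector cloud best matches the measured one under the selected norm.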

  20. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  1. Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases

    PubMed Central

    Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.

    2007-01-01

    The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
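    The vector space structure described above rests on the logit map and its inverse: probabilities are sent to the real line, added and scaled there, and mapped back. A minimal sketch (the atlas values below are invented for illustration):

```python
import math

def logit(p):
    """Map a probability in (0, 1) to the real line (LogOdds)."""
    return math.log(p / (1.0 - p))

def sigmoid(x):
    """Inverse of logit: map a LogOdds value back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Probabilistic atlas values at two voxels (probability of a label).
atlas_a = [0.9, 0.2]
atlas_b = [0.6, 0.4]

# Addition and scalar multiplication happen in LogOdds space,
# then results are mapped back to probabilities.
averaged = [sigmoid(0.5 * (logit(p) + logit(q)))
            for p, q in zip(atlas_a, atlas_b)]
```

Because the operations are linear in LogOdds space, constructions such as principal component analysis of atlases become well defined while the mapped-back values remain valid probabilities.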

  2. Thrust vector control using electric actuation

    NASA Astrophysics Data System (ADS)

    Bechtel, Robert T.; Hall, David K.

    1995-01-01

    Presently, gimbaling of launch vehicle engines for thrust vector control is generally accomplished using a hydraulic system. In the case of the space shuttle solid rocket boosters and main engines, these systems are powered by hydrazine auxiliary power units. Use of electromechanical actuators would provide significant advantages in cost and maintenance. However, present energy source technologies such as batteries are heavy to the point of causing significant weight penalties. Utilizing capacitor technology developed by the Auburn University Space Power Institute in collaboration with the Auburn CCDS, Marshall Space Flight Center (MSFC) and Auburn are developing EMA system components with emphasis on high discharge rate energy sources compatible with space shuttle type thrust vector control requirements. Testing has been done at MSFC as part of EMA system tests with loads up to 66,000 newtons for pulse times of several seconds. Results show such an approach to be feasible, providing a potential for reduced weight and operations costs for new launch vehicles.

  3. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object's departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided several hours after the separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.

  4. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    NASA Astrophysics Data System (ADS)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonal character, nonlinearity, randomicity and fuzziness, the existing support vector kernels do not closely approach the random curve of the sales time series in the quadratic continuous integral space. In this paper, we present a hybrid intelligent system combining the wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of application to car sale series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible; a comparison between the method proposed in this paper and other methods is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.

  5. Anisotropic fractal media by vector calculus in non-integer dimensional space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2014-08-15

    A review of different approaches to describing anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper, differential operators of the first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  6. A geometric approach to problems in birational geometry.

    PubMed

    Chi, Chen-Yu; Yau, Shing-Tung

    2008-12-02

    A classical set of birational invariants of a variety are its spaces of pluricanonical forms and some of their canonically defined subspaces. Each of these vector spaces admits a typical metric structure which is also birationally invariant. These vector spaces so metrized will be referred to as the pseudonormed spaces of the original varieties. A fundamental question is the following: Given two mildly singular projective varieties with some of the first variety's pseudonormed spaces being isometric to the corresponding ones of the second variety's, can one construct a birational map between them that induces these isometries? In this work, a positive answer to this question is given for varieties of general type. This can be thought of as a theorem of Torelli type for birational equivalence.

  7. Climate impacts on environmental risks evaluated from space: a conceptual approach to the case of Rift Valley Fever in Senegal.

    PubMed

    Tourre, Yves M; Lacaux, Jean-Pierre; Vignolles, Cécile; Lafaye, Murielle

    2009-11-11

    Climate and environment vary across many spatio-temporal scales, including that of climate change, with impacts on ecosystems, vector-borne diseases and public health worldwide. To develop a conceptual approach by mapping climatic and environmental conditions from space and studying their linkages with Rift Valley Fever (RVF) epidemics in Senegal. Ponds in which mosquitoes could thrive were identified from remote sensing using high-resolution SPOT-5 satellite images. Additional data on pond dynamics and rainfall events (obtained from the Tropical Rainfall Measuring Mission) were combined with hydrological in-situ data. Localisation of vulnerable hosts such as penned cattle (from the QuickBird satellite) was also used. The dynamic spatio-temporal distribution of Aedes vexans density (one of the main RVF vectors) is based on the total rainfall amount and pond dynamics. While Zones Potentially Occupied by Mosquitoes are mapped, detailed risk areas, i.e. zones where hazards and vulnerability co-occur, are expressed as percentages of areas where cattle are potentially exposed to mosquito bites. This new conceptual approach, using precise remote-sensing techniques, relies simply upon rainfall distribution, also evaluated from space. It is meant to contribute to the implementation of operational early warning systems for RVF based on both natural and anthropogenic climatic and environmental changes. In a climate change context, this approach could also be applied to other vector-borne diseases and places worldwide.

  8. The canonical Lagrangian approach to three-space general relativity

    NASA Astrophysics Data System (ADS)

    Shyam, Vasudev; Venkatesh, Madhavan

    2013-07-01

    We study the action for the three-space formalism of general relativity, better known as the Barbour-Foster-Ó Murchadha action, which is a square-root Baierlein-Sharp-Wheeler action. In particular, we explore the (pre)symplectic structure by pulling it back via a Legendre map to the tangent bundle of the configuration space of this action. With it we attain the canonical Lagrangian vector field which generates the gauge transformations (3-diffeomorphisms) and the true physical evolution of the system. This vector field encapsulates all the dynamics of the system. We also discuss briefly the observables and perennials for this theory. We then present a symplectic reduction of the constrained phase space.

  9. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
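    The phase space reconstruction step described above is, in essence, a time-delay embedding. A minimal sketch, assuming toy data and parameters (the LSSVM and PSO stages are omitted):

```python
import numpy as np

def phase_space_reconstruct(series, dim=3, tau=1):
    """Time-delay embedding: map a 1-D series into a dim-dimensional
    phase space with delay tau; row i is the delay vector
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for chosen dim/tau")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# A short demand series embedded with dim=3, tau=2.
X = phase_space_reconstruct(np.arange(10), dim=3, tau=2)
```

Each row of `X` is then a training sample for the downstream regressor, which turns one-step-ahead prediction into ordinary regression on the embedded vectors.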

  10. A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge.

    PubMed

    Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E

    2017-01-01

    The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases of up to 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.
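    The CLCM idea of mapping a concept vector between language-specific spaces can be sketched as follows; the concept names, interlanguage table, and class centroids are hypothetical toy values, not the paper's data:

```python
import numpy as np

# Hypothetical interlanguage links: Spanish concept -> English concept.
interlanguage = {"Corazón": "Heart", "Pulmón": "Lung", "Gripe": "Influenza"}
en_concepts = ["Heart", "Lung", "Influenza"]
idx = {c: i for i, c in enumerate(en_concepts)}

def to_english_vector(es_concept_weights):
    """CLCM step: map a bag-of-concepts vector from the Spanish space
    to the English space via interlanguage links; unlinkable concepts
    are dropped."""
    v = np.zeros(len(en_concepts))
    for concept, w in es_concept_weights.items():
        if concept in interlanguage:
            v[idx[interlanguage[concept]]] += w
    return v

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

# Toy English-space training centroids for two classes.
centroids = {"cardiology": np.array([1.0, 0.1, 0.0]),
             "infectious": np.array([0.0, 0.1, 1.0])}

es_doc = {"Gripe": 2.0, "Pulmón": 0.5}       # a Spanish test document
v = to_english_vector(es_doc)
label = max(centroids, key=lambda c: cosine(v, centroids[c]))
```

Once documents from both languages live in the same concept space, any standard vector-space classifier trained on the English side applies directly.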

  11. A Bag of Concepts Approach for Biomedical Document Classification Using Wikipedia Knowledge*. Spanish-English Cross-language Case Study.

    PubMed

    Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E

    2017-10-26

    The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases of up to 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values < 0.0001. Using knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.

  12. Climate impacts on environmental risks evaluated from space: a conceptual approach to the case of Rift Valley Fever in Senegal

    PubMed Central

    Tourre, Yves M.; Lacaux, Jean-Pierre; Vignolles, Cécile; Lafaye, Murielle

    2009-01-01

    Background Climate and environment vary across many spatio-temporal scales, including that of climate change, with impacts on ecosystems, vector-borne diseases and public health worldwide. Objectives To develop a conceptual approach by mapping climatic and environmental conditions from space and studying their linkages with Rift Valley Fever (RVF) epidemics in Senegal. Design Ponds in which mosquitoes could thrive were identified from remote sensing using high-resolution SPOT-5 satellite images. Additional data on pond dynamics and rainfall events (obtained from the Tropical Rainfall Measuring Mission) were combined with hydrological in-situ data. Localisation of vulnerable hosts such as penned cattle (from the QuickBird satellite) was also used. Results The dynamic spatio-temporal distribution of Aedes vexans density (one of the main RVF vectors) is based on the total rainfall amount and pond dynamics. While Zones Potentially Occupied by Mosquitoes are mapped, detailed risk areas, i.e. zones where hazards and vulnerability co-occur, are expressed as percentages of areas where cattle are potentially exposed to mosquito bites. Conclusions This new conceptual approach, using precise remote-sensing techniques, relies simply upon rainfall distribution, also evaluated from space. It is meant to contribute to the implementation of operational early warning systems for RVF based on both natural and anthropogenic climatic and environmental changes. In a climate change context, this approach could also be applied to other vector-borne diseases and places worldwide. PMID:20052381

  13. Some Applications Of Semigroups And Computer Algebra In Discrete Structures

    NASA Astrophysics Data System (ADS)

    Bijev, G.

    2009-11-01

    An algebraic approach to the pseudoinverse generalization problem in Boolean vector spaces is used. A map (p) is defined, which is similar to an orthogonal projection in linear vector spaces. Some other important maps with properties similar to those of the generalized inverses (pseudoinverses) of linear transformations and the matrices corresponding to them are also defined and investigated. Let Ax = b be an equation with Boolean matrix A and Boolean vectors x and b. Stochastic experiments for solving the equation, which involve the maps defined and use computer algebra methods, have been performed. As a result, the Hamming distance between the vectors Ax = p(b) and b is equal or close to the least possible. We also share our experience in using computer algebra systems for teaching discrete mathematics and linear algebra, and for research. Some examples of computations with binary relations using Maple are given.
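    A minimal sketch of the kind of stochastic experiment described, assuming the standard Boolean matrix-vector product (OR of ANDs) and plain random search; the paper's projection maps and computer algebra machinery are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def bool_matvec(A, x):
    """Boolean matrix-vector product: (A x)_i = OR_j (A_ij AND x_j)."""
    return np.any(A & x, axis=1)

def hamming(u, v):
    return int(np.sum(u != v))

def stochastic_solve(A, b, trials=2000):
    """Randomly search for a Boolean x minimizing the Hamming distance
    between A x and b (a simple stand-in for the paper's experiments)."""
    n = A.shape[1]
    best_x, best_d = None, n + len(b) + 1
    for _ in range(trials):
        x = rng.integers(0, 2, size=n).astype(bool)
        d = hamming(bool_matvec(A, x), b)
        if d < best_d:
            best_x, best_d = x, d
            if d == 0:
                break
    return best_x, best_d

A = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]], dtype=bool)
x_true = np.array([1, 0, 1], dtype=bool)
b = bool_matvec(A, x_true)          # a consistent right-hand side
x, d = stochastic_solve(A, b)
```

For a consistent system the search finds an exact solution (distance 0); for an inconsistent b it returns the best approximation found, mirroring the "equal or close to the least possible" behavior described.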

  14. Covariantized vector Galileons

    NASA Astrophysics Data System (ADS)

    Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo

    2016-03-01

    Vector Galileons are ghost-free systems containing higher derivative interactions of vector fields. They break the vector gauge symmetry, and the dynamics of the longitudinal vector polarizations acquire a Galileon symmetry in an appropriate decoupling limit in Minkowski space. Using an Arnowitt-Deser-Misner approach, we carefully reconsider the coupling with gravity of vector Galileons, with the aim of studying the necessary conditions to avoid the propagation of ghosts. We develop arguments that put on a more solid footing the results previously obtained in the literature. Moreover, working in analogy with the scalar counterpart, we find indications for the existence of a "beyond Horndeski" theory involving vector degrees of freedom that avoids the propagation of ghosts thanks to secondary constraints. In addition, we analyze a Higgs mechanism for generating vector Galileons through spontaneous symmetry breaking, and we present its consistent covariantization.

  15. Estimation of the chemical rank for the three-way data: a principal norm vector orthogonal projection approach.

    PubMed

    Hong-Ping, Xie; Jian-Hui, Jiang; Guo-Li, Shen; Ru-Qin, Yu

    2002-01-01

    A new approach for estimating the chemical rank of a three-way array, called the principal norm vector orthogonal projection method, has been proposed. The method is based on the fact that the chemical rank of a three-way data array is equal to that of the column space of the unfolded matrix along the spectral or chromatographic mode. The vector with maximum Frobenius norm is selected among all the column vectors of the unfolded matrix as the principal norm vector (PNV). A transformation is conducted on the column vectors with an orthogonal projection matrix formulated from the PNV. The mathematical rank of the column space of the residual matrix thus obtained decreases by one. Such orthogonal projection is carried out repeatedly until the contribution of chemical species to the signal data has been entirely removed. At that point the decrease of the mathematical rank equals the chemical rank, and the remaining residual subspace is entirely due to the noise contribution. The chemical rank can then be estimated easily using an F-test. The method has been applied successfully to a simulated HPLC-DAD type three-way data array and to two real excitation-emission fluorescence data sets of amino acid mixtures and dye mixtures. Simulations with relatively high added noise show that the method is robust against heteroscedastic noise. The proposed algorithm is simple and easy to program, with a light computational burden.
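    The deflation loop at the heart of the method can be sketched as follows; a simple norm threshold stands in for the F-test stopping rule, and the data are synthetic:

```python
import numpy as np

def pnv_rank(X, tol=1e-8, max_rank=None):
    """Estimate the rank of the column space of an unfolded three-way
    array by repeated principal-norm-vector orthogonal projection."""
    X = np.asarray(X, dtype=float).copy()
    max_rank = max_rank or min(X.shape)
    rank = 0
    for _ in range(max_rank):
        norms = np.linalg.norm(X, axis=0)
        j = int(np.argmax(norms))        # principal norm vector (PNV)
        if norms[j] <= tol:              # residual is noise-level: stop
            break
        v = X[:, j] / norms[j]
        X -= np.outer(v, v @ X)          # project out span(v)
        rank += 1
    return rank

# A noise-free unfolded matrix of column-space rank 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))
rank = pnv_rank(X)
```

With noisy data the fixed threshold would be replaced by the F-test the abstract describes, comparing the residual variance against the noise level.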

  16. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated frequencies or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single shaker test methods. By employing band-selectable analysis (zoom) techniques and Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve fit procedure, the usefulness of the single shaker approach can be extended.
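    Kennedy-Pancu analysis fits circles to frequency response data in the Nyquist plane near each resonance; the fitting step can be illustrated with an algebraic (Kasa-style) least-squares circle fit, shown here on synthetic noise-free points:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: solve for center (cx, cy)
    and radius r from x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic Nyquist-plane points on a circle centered at (1, -2), radius 3.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle(1 + 3 * np.cos(t), -2 + 3 * np.sin(t))
```

In practice only a narrow band of points around the resonance would be fitted, and the angular spacing of points on the circle carries the damping information.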

  17. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    PubMed

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for discrimination in the arousal and valence space are the EEG, the Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
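    The RFE mechanics over multiple outputs can be sketched as follows; a multi-output linear least-squares regressor stands in for the multiple-output support vector regressor, and the data are synthetic:

```python
import numpy as np

def rfe_multioutput(X, Y, n_keep):
    """Recursive Feature Elimination over a multi-output regressor:
    repeatedly refit, score each feature by the norm of its coefficient
    row across all outputs (here arousal and valence), and drop the
    weakest, until n_keep features remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        W, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)  # (features, outputs)
        scores = np.linalg.norm(W, axis=1)   # joint importance over outputs
        keep.pop(int(np.argmin(scores)))     # argmin indexes into `keep`
    return keep

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
# Both outputs depend only on features 0 and 2; features 1, 3, 4 are noise.
Y = np.column_stack([2 * X[:, 0] + X[:, 2], X[:, 0] - 3 * X[:, 2]])
selected = rfe_multioutput(X, Y, n_keep=2)
```

Scoring a feature jointly across outputs is what distinguishes this from running RFE per dimension: a feature weak for arousal but strong for valence survives.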

  18. An improved approach to infer protein-protein interaction based on a hierarchical vector space model.

    PubMed

    Zhang, Jiongmin; Jia, Ke; Jia, Jinmeng; Qian, Ying

    2018-04-27

    Comparing and classifying the functions of gene products is important in today's biomedical research. The semantic similarity derived from the Gene Ontology (GO) annotation has been regarded as one of the most widely used indicators for protein interaction. Among the various approaches proposed, those based on the vector space model are relatively simple, but their effectiveness is far from satisfactory. We propose a Hierarchical Vector Space Model (HVSM) for computing semantic similarity between different genes or their products, which enhances the basic vector space model by introducing the relations between GO terms. Besides the directly annotated terms, HVSM also takes their ancestors and descendants related by "is_a" and "part_of" relations into account. Moreover, HVSM introduces the concept of a Certainty Factor to calibrate the semantic similarity based on the number of terms annotated to genes. To assess the performance of our method, we applied HVSM to Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets. Compared with TCSS, Resnik, and other classic similarity measures, HVSM achieved significant improvement in distinguishing positive from negative protein interactions. We also tested its correlation with sequence, EC, and Pfam similarity using the online tool CESSM. HVSM showed an improvement of up to 4% compared to TCSS, 8% compared to IntelliGO, 12% compared to basic VSM, 6% compared to Resnik, 8% compared to Lin, 11% compared to Jiang, 8% compared to Schlicker, and 11% compared to SimGIC using AUC scores. The CESSM test showed that HVSM is comparable to SimGIC and superior to all other similarity measures in CESSM, as well as to TCSS. Supplementary information and the software are available at https://github.com/kejia1215/HVSM .
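    The core idea of enriching a term vector with its ancestors can be sketched on a toy hierarchy; the term names and the 0.5 ancestor weight are illustrative only, and HVSM's actual weighting and Certainty Factor are more elaborate:

```python
import numpy as np

# Toy "is_a" hierarchy (hypothetical GO-like terms): child -> parents.
parents = {"t4": ["t2"], "t5": ["t2", "t3"], "t2": ["t1"], "t3": ["t1"], "t1": []}
terms = sorted(parents)
idx = {t: i for i, t in enumerate(terms)}

def ancestors(term):
    """All ancestors of a term along is_a edges (transitive closure)."""
    out, stack = set(), list(parents[term])
    while stack:
        p = stack.pop()
        if p not in out:
            out.add(p)
            stack.extend(parents[p])
    return out

def gene_vector(annotated):
    """Hierarchical vector: annotated terms get weight 1.0, their
    ancestors a smaller weight (0.5 here, purely illustrative)."""
    v = np.zeros(len(terms))
    for t in annotated:
        v[idx[t]] = 1.0
        for a in ancestors(t):
            v[idx[a]] = max(v[idx[a]], 0.5)
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

g1, g2, g3 = gene_vector(["t4"]), gene_vector(["t5"]), gene_vector(["t3"])
```

A basic VSM over the directly annotated terms alone would score both pairs zero, since the annotated terms are disjoint; the shared ancestors (t1, t2) are what make g1 and g2 more similar than g1 and g3.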

  19. A Feature Mining Based Approach for the Classification of Text Documents into Disjoint Classes.

    ERIC Educational Resources Information Center

    Nieto Sanchez, Salvador; Triantaphyllou, Evangelos; Kraft, Donald

    2002-01-01

    Proposes a new approach for classifying text documents into two disjoint classes. Highlights include a brief overview of document clustering; a data mining approach called the One Clause at a Time (OCAT) algorithm which is based on mathematical logic; vector space model (VSM); and comparing the OCAT to the VSM. (Author/LRW)

  20. The organization of conspecific face space in nonhuman primates

    PubMed Central

    Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.

    2013-01-01

    Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
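    The norm-based notion of distinctiveness in face space can be sketched with synthetic feature data; in the actual study the dimensions come from measured or rated facial features and multidimensional scaling:

```python
import numpy as np

rng = np.random.default_rng(3)
faces = rng.normal(size=(30, 4))      # 30 faces, 4 feature dimensions (toy)

# Norm-based face space: the average face defines the origin, each face
# is a vector from that origin, and distinctiveness is vector length.
mean_face = faces.mean(axis=0)
vectors = faces - mean_face
distinctiveness = np.linalg.norm(vectors, axis=1)

most_distinctive = int(np.argmax(distinctiveness))
```

The study's finding can then be phrased in these terms: for chimpanzees, discrimination performance increased with this vector length, while for rhesus monkeys it did not.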

  1. Resident Space Object Characterization and Behavior Understanding via Machine Learning and Ontology-based Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.

    2016-09-01

    In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g. Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge base (KB), and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy, enabling a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.

  2. Thermal noise model of antiferromagnetic dynamics: A macroscopic approach

    NASA Astrophysics Data System (ADS)

    Li, Xilai; Semenov, Yuriy; Kim, Ki Wook

    In the search for post-silicon technologies, antiferromagnetic (AFM) spintronics is receiving widespread attention. Due to its faster dynamics compared with its ferromagnetic counterpart, AFM enables ultra-fast magnetization switching and THz oscillations. A crucial factor that affects the stability of antiferromagnetic dynamics is thermal fluctuation, rarely considered in AFM research. Here, we derive from theory both the stochastic dynamic equations for the macroscopic AFM Neel vector (L-vector) and the corresponding Fokker-Planck equation for the L-vector distribution function. For the dynamic equation approach, thermal noise is modeled by a stochastic fluctuating magnetic field that affects the AFM dynamics. The field is correlated within the correlation time and its amplitude is derived from energy dissipation theory. For the distribution function approach, the inertial behavior of AFM dynamics forces consideration of the generalized space, including both coordinates and velocities. Finally, applying the proposed thermal noise model, we analyze a particular case of L-vector reversal of AFM nanoparticles by voltage-controlled perpendicular magnetic anisotropy (PMA) with a tailored pulse width. This work was supported, in part, by SRC/NRI SWAN.

  3. Recombinant Enaction: Manipulatives Generate New Procedures in the Imagination, by Extending and Recombining Action Spaces

    ERIC Educational Resources Information Center

    Rahaman, Jeenath; Agrawal, Harshit; Srivastava, Nisheeth; Chandrasekharan, Sanjay

    2018-01-01

    Manipulation of physical models such as tangrams and tiles is a popular approach to teaching early mathematics concepts. This pedagogical approach is extended by new computational media, where mathematical entities such as equations and vectors can be virtually manipulated. The cognitive and neural mechanisms supporting such manipulation-based…

  4. Ranked centroid projection: a data visualization approach with self-organizing maps.

    PubMed

    Yen, G G; Wu, Z

    2008-02-01

    The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
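    A plain SOM training loop, the baseline that GHSOM grows hierarchically, can be sketched as follows with synthetic data and toy parameters:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr=0.5, sigma=1.0, seed=0):
    """Minimal self-organizing map: for each sample, find the best
    matching unit (BMU) and pull its grid neighbourhood toward the
    sample, with learning rate and neighbourhood width decaying."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows * cols, data.shape[1]))
    # Grid coordinates of each unit, for the neighbourhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * (sigma * decay + 1e-9) ** 2))
            w += (lr * decay) * h[:, None] * (x - w)
    return w

data = np.random.default_rng(1).normal(size=(100, 3))
w = train_som(data)
```

In the paper's pipeline the inputs would be encoded document vectors rather than random points, and the ranked centroid projection would then place each input on the trained 2-D grid by ranking its distances to the unit weights.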

  5. Projective formulation of Maggi's method for nonholonomic systems analysis

    NASA Astrophysics Data System (ADS)

    Blajer, Wojciech

    1992-04-01

    A projective interpretation of Maggi's approach to the dynamic analysis of nonholonomic systems is presented. Both linear and nonlinear constraint cases are treated in a unified fashion, using the language of vector spaces and tensor algebra.

  6. Climate Change and Vector Borne Diseases on NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; DeYoung, Russell J.; Shepanek, Marc A.; Kamel, Ahmed

    2014-01-01

    Increasing global temperatures, weather patterns with above-average storm intensities, and higher sea levels have been identified as phenomena associated with global climate change. As a causal system, climate change could contribute to vector-borne diseases in humans. Vectors of concern in the vicinity of Langley Research Center include mosquitoes and ticks that transmit diseases originating regionally, nationwide, or from outside the US. Recognizing that vector-borne diseases propagate under changing climatic conditions, and understanding the conditions in which they may exist or spread, presents opportunities for monitoring their progress and mitigating their potential impacts through communication, continued monitoring, and adaptation. Because personnel provide direct and fundamental support to NASA mission success, a continuously improved understanding of climatic conditions, and of the diseases that result from them, helps to reduce risk in terrestrial space technologies, ground operations, and space research. This research addresses climatic conditions that promote environments conducive to the increase of disease vectors. The investigation includes evaluation of local mosquito population counts and rainfall data for statistical correlation, and identification of planning recommendations unique to LaRC and other NASA Centers to assess adaptation approaches and Center-level planning strategies.

  7. Novel Biomarker for Evaluating Ischemic Stress Using an Electrogram Derived Phase Space

    PubMed Central

    Good, Wilson W.; Erem, Burak; Coll-Font, Jaume; Brooks, Dana H.; MacLeod, Rob S.

    2017-01-01

    The underlying pathophysiology of ischemia is poorly understood, resulting in unreliable clinical diagnosis of this disease. This limited knowledge of underlying mechanisms suggested a data driven approach, which seeks to identify patterns in the ECG data that can be linked statistically to underlying behavior and conditions of ischemic tissue. Previous studies have suggested that an approach known as Laplacian eigenmaps (LE) can identify trajectories, or manifolds, that are sensitive to different spatiotemporal consequences of ischemic stress, and thus serve as potential clinically relevant biomarkers. We applied the LE approach to measured transmural potentials in several canine preparations, recorded during control and ischemic conditions, and discovered regions on an approximated QRS-derived manifold that were sensitive to ischemia. By identifying a vector pointing to ischemia-associated changes to the manifold and measuring the shift in trajectories along that vector during ischemia, which we denote as Mshift, it was possible to also pull that vector back into signal space and determine which electrodes were responsible for driving the observed changes in the manifold. We refer to the signal space change as the manifold differential (Mdiff). Both the Mdiff and Mshift metrics show a similar degree of sensitivity to ischemic changes as standard metrics applied during the ST segment in detecting ischemic regions. The new metrics also were able to distinguish between sub-types of ischemia. Thus our results indicate that it may be possible to use the Mshift and Mdiff metrics along with ST derived metrics to determine whether tissue within the myocardium is ischemic or not. PMID:28451594

  8. Novel Biomarker for Evaluating Ischemic Stress Using an Electrogram Derived Phase Space.

    PubMed

    Good, Wilson W; Erem, Burak; Coll-Font, Jaume; Brooks, Dana H; MacLeod, Rob S

    2016-09-01

    The underlying pathophysiology of ischemia is poorly understood, resulting in unreliable clinical diagnosis of this disease. This limited knowledge of underlying mechanisms suggested a data driven approach, which seeks to identify patterns in the ECG data that can be linked statistically to underlying behavior and conditions of ischemic tissue. Previous studies have suggested that an approach known as Laplacian eigenmaps (LE) can identify trajectories, or manifolds, that are sensitive to different spatiotemporal consequences of ischemic stress, and thus serve as potential clinically relevant biomarkers. We applied the LE approach to measured transmural potentials in several canine preparations, recorded during control and ischemic conditions, and discovered regions on an approximated QRS-derived manifold that were sensitive to ischemia. By identifying a vector pointing to ischemia-associated changes to the manifold and measuring the shift in trajectories along that vector during ischemia, which we denote as Mshift, it was possible to also pull that vector back into signal space and determine which electrodes were responsible for driving the observed changes in the manifold. We refer to the signal space change as the manifold differential (Mdiff). Both the Mdiff and Mshift metrics show a similar degree of sensitivity to ischemic changes as standard metrics applied during the ST segment in detecting ischemic regions. The new metrics also were able to distinguish between sub-types of ischemia. Thus our results indicate that it may be possible to use the Mshift and Mdiff metrics along with ST derived metrics to determine whether tissue within the myocardium is ischemic or not.
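    The Laplacian eigenmaps step can be sketched as follows, on a synthetic 3-D curve rather than electrogram data; a dense eigensolver and a simple kNN graph stand in for production choices:

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2):
    """Basic Laplacian eigenmaps: build a symmetric kNN graph, form the
    graph Laplacian L = D - W, and embed each point with the
    eigenvectors of the smallest nonzero eigenvalues."""
    n = X.shape[0]
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1 : n_neighbors + 1]   # skip self
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                              # symmetrize
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1 : n_components + 1]                # drop constant mode

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 4 * np.pi, 80))
X = np.column_stack([np.cos(t), np.sin(t), t])          # a 3-D helix
Y = laplacian_eigenmaps(X)
```

The rows of `Y` are the low-dimensional trajectory; the paper's Mshift and Mdiff metrics would then be defined on shifts of such trajectories between control and ischemic conditions.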

  9. Development and Application of Modern Optimal Controllers for a Membrane Structure Using Vector Second Order Form

    NASA Astrophysics Data System (ADS)

    Ferhat, Ipar

    With increasing advancement in material science and the computational power of current computers that allows us to analyze high dimensional systems, very light and large structures are being designed and built for aerospace applications. One example is a reflector of a space telescope that is made of membrane structures. These reflectors are light and foldable, which makes shipment easier and cheaper, unlike traditional reflectors made of glass or other heavy materials. However, one of the disadvantages of membranes is that they are very sensitive to external changes, such as thermal loads or maneuvering of the space telescope. These effects create vibrations that dramatically affect the performance of the reflector. To overcome vibrations in membranes, in this work, piezoelectric actuators are used to develop distributed controllers for membranes. These actuators generate bending effects to suppress the vibration. The actuators attached to a membrane are relatively thick, which makes the system heterogeneous; thus, an analytical solution cannot be obtained for the partial differential equation of the system. Therefore, the Finite Element Model is applied to obtain an approximate solution for the membrane actuator system. Another difficulty that arises with very flexible large structures is the dimension of the discretized system. To obtain an accurate result, the system needs to be discretized using smaller segments, which makes the dimension of the system very high. This issue will persist as long as improving technology allows increasingly complex and large systems to be designed and built. To deal with this difficulty, the analysis of the system and the controller development to suppress the vibration are carried out using vector second order form as an alternative to vector first order form. In vector second order form, the number of equations that need to be solved is half the number of equations in vector first order form.
Analyzing the system for control characteristics such as stability, controllability, and observability is a key step that must be carried out before developing a controller. This analysis determines what kind of system is being modeled and the appropriate approach for controller development; the accuracy of the system analysis is therefore crucial. The results of the system analysis using vector second order form and vector first order form show the computational advantages of the vector second order form. Using similar concepts, the LQR and LQG controllers developed to suppress the vibration are derived in vector second order form. To develop a controller in vector second order form, two different approaches are used. One reduces the size of the Algebraic Riccati Equation by half by partitioning the solution matrix. The other applies the Hamiltonian method directly in vector second order form. Controllers are developed using both approaches and compared to each other. Some simple solutions for special cases are derived in vector second order form using the reduced Algebraic Riccati Equation. The advantages and drawbacks of both approaches are explained through examples. System analysis and controller applications are carried out for a square membrane system with four actuators. Two systems with different actuator locations are analyzed: one has the actuators at the corners of the membrane, the other has the actuators away from the corners. The structural and control effects of the actuator locations are demonstrated with mode shapes and simulations. The results of the controller applications and the comparison of the vector first order form with the vector second order form demonstrate the efficacy of the controllers.
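The dimension doubling is easy to see concretely. The sketch below uses a generic mass-spring-damper system standing in for the membrane-actuator FEM model; the matrices M, C, K, B are invented for illustration and are not taken from the thesis:

```python
import numpy as np

# Generic structural dynamics in vector second order form:
#   M q'' + C q' + K q = B u   (n second-order equations)
n = 4                                           # discretized degrees of freedom
rng = np.random.default_rng(0)
M = np.diag(rng.uniform(1.0, 2.0, n))           # mass matrix (illustrative)
K = np.diag(rng.uniform(5.0, 10.0, n))          # stiffness matrix (illustrative)
C = 0.01 * K                                    # proportional damping
B = np.eye(n)[:, :2]                            # two actuator inputs

# Vector first order form stacks x = [q; q'] and doubles the dimension:
#   x' = A x + Bu u,  with A of size 2n x 2n.
A = np.block([
    [np.zeros((n, n)), np.eye(n)],
    [-np.linalg.solve(M, K), -np.linalg.solve(M, C)],
])
Bu = np.vstack([np.zeros((n, 2)), np.linalg.solve(M, B)])

print(A.shape)   # (8, 8): 2n coupled first-order equations
print(M.shape)   # (4, 4): n second-order equations in the original form
```

Working directly with the n-dimensional (M, C, K) triple, as the thesis advocates, avoids ever forming or factoring the 2n x 2n matrix A.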

  10. Complexity of Kronecker Operations on Sparse Matrices with Applications to the Solution of Markov Models

    NASA Technical Reports Server (NTRS)

    Buchholz, Peter; Ciardo, Gianfranco; Donatelli, Susanna; Kemper, Peter

    1997-01-01

    We present a systematic discussion of algorithms to multiply a vector by a matrix expressed as the Kronecker product of sparse matrices, extending previous work in a unified notational framework. Then, we use our results to define new algorithms for the solution of large structured Markov models. In addition to a comprehensive overview of existing approaches, we give new results with respect to: (1) managing certain types of state-dependent behavior without incurring extra cost; (2) supporting both Jacobi-style and Gauss-Seidel-style methods by appropriate multiplication algorithms; (3) speeding up algorithms that consider probability vectors of size equal to the "actual" state space instead of the "potential" state space.
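The core trick such algorithms build on can be illustrated with dense factors: multiplying a vector by kron(A, B) without ever forming the Kronecker product, via the identity kron(A, B) @ vec(X) = vec(A @ X @ B^T). This is a minimal dense two-factor sketch, not the paper's sparse, multi-factor algorithms:

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute kron(A, B) @ x without forming the Kronecker product.

    Uses kron(A, B) @ vec(X) = vec(A @ X @ B.T), with vec taken
    row-major to match NumPy's reshape order. Cost is
    O(mA*nA*nB + mA*nB*mB) instead of O(mA*mB*nA*nB).
    """
    nA, nB = A.shape[1], B.shape[1]
    X = x.reshape(nA, nB)
    return (A @ X @ B.T).reshape(-1)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 2))
x = rng.standard_normal(4 * 2)

direct = np.kron(A, B) @ x       # forms the full 15 x 8 Kronecker product
fast = kron_matvec(A, B, x)      # never materializes it
print(np.allclose(direct, fast))  # True
```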

  11. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand as global methods those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells using an unsupervised vector quantization algorithm, LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is solved locally with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machine (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
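A toy version of the local-global pipeline can be sketched in a few lines: quantize the sample with Lloyd iterations (a simple stand-in for the LBG algorithm the paper uses), then attach a Bayes-style majority rule to each cell. All data and function names here are invented for illustration:

```python
import numpy as np

def lloyd_codebook(X, k, iters=20, seed=0):
    """Simple Lloyd/k-means quantizer standing in for LBG."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        cells = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(cells == j):
                centers[j] = X[cells == j].mean(axis=0)
    return centers

def fit_local_rules(X, y, centers):
    """Per-cell class frequencies: a minimal Bayes-style local rule."""
    cells = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    classes = np.unique(y)
    rules = {}
    for j in range(len(centers)):
        yj = y[cells == j]
        counts = np.array([(yj == c).sum() for c in classes], dtype=float)
        rules[j] = classes[np.argmax(counts)] if len(yj) else classes[0]
    return rules

def predict(X, centers, rules):
    cells = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    return np.array([rules[j] for j in cells])

# Two well-separated Gaussian blobs as a toy problem.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
centers = lloyd_codebook(X, k=4)
rules = fit_local_rules(X, y, centers)
acc = (predict(X, centers, rules) == y).mean()
print(acc)  # expect 1.0 on this separable toy data
```

The point of the split is that each cell's subproblem is far easier than the global one; the paper's second stage is richer than the majority rule used here.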

  12. Data-driven probability concentration and sampling on manifold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.

  13. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  14. Observables of QCD diffraction

    NASA Astrophysics Data System (ADS)

    Mieskolainen, Mikael; Orava, Risto

    2017-03-01

    A new combinatorial vector space measurement model is introduced for soft QCD diffraction. The model-independent mathematical construction resolves experimental complications; the theoretical framework of the approach includes the Good-Walker view of diffraction and Regge phenomenology, together with AGK cutting rules and random fluctuations.

  15. Fundamental Principles of Classical Mechanics: a Geometrical Perspective

    NASA Astrophysics Data System (ADS)

    Lam, Kai S.

    2014-07-01

    Classical mechanics is the quantitative study of the laws of motion for macroscopic physical systems with mass. The fundamental laws of this subject, known as Newton's Laws of Motion, are expressed in terms of second-order differential equations governing the time evolution of vectors in a so-called configuration space of a system (see Chapter 12). In an elementary setting, these are usually vectors in 3-dimensional Euclidean space, such as position vectors of point particles; but typically they can be vectors in higher dimensional and more abstract spaces. A general knowledge of the mathematical properties of vectors, not only in their most intuitive incarnations as directed arrows in physical space but as elements of abstract linear vector spaces, and those of linear operators (transformations) on vector spaces as well, is then indispensable in laying the groundwork for both the physical and the more advanced mathematical - more precisely topological and geometrical - concepts that will prove to be vital in our subject. In this beginning chapter we will review these properties, and introduce the all-important related notions of dual spaces and tensor products of vector spaces. The notational convention for vectorial and tensorial indices used for the rest of this book (except when otherwise specified) will also be established...

  16. Intruder Polarization.

    DTIC Science & Technology

    1985-12-01

    mentioned project whose objective was to produce … wave propagation vector m. This means that, if we want to perform the two-dimensional inverse Fourier transformation (to transform from k-space) … The propagation vector m, which was used extensively in the original approach, is still retained in the new approach. It is used in modelling m …

  17. Optimizing interplanetary trajectories with deep space maneuvers. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Navagh, John

    1993-01-01

    Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.

  18. Optimizing interplanetary trajectories with deep space maneuvers

    NASA Astrophysics Data System (ADS)

    Navagh, John

    1993-09-01

    Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.

  19. A Novel Sensor for Attitude Determination Using Global Positioning System Signals

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Quinn, David A.; Markley, F. Landis; McCullough, Jon D.

    1998-01-01

    An entirely new sensor approach for attitude determination using Global Positioning System (GPS) signals is developed. The concept involves the use of multiple GPS antenna elements arrayed on a single sensor head to provide maximum GPS space vehicle availability. A number of sensor element configurations are discussed. In addition to the navigation function, the array is used to find which GPS space vehicles are within the field-of-view of each antenna element. Attitude determination is performed by considering the sightline vectors of the found GPS space vehicles together with the fixed boresight vectors of the individual antenna elements. This approach has clear advantages over the standard differential carrier-phase approach. First, errors induced by multipath effects can be significantly reduced or eliminated altogether. Also, integer ambiguity resolution is not required, nor do line biases need to be determined through costly and cumbersome self-surveys. Furthermore, the new sensor does not require individual antennas to be physically separated to form interferometric baselines to determine attitude. Finally, development potential of the new sensor is limited only by antenna and receiver technology development unlike the physical limitations of the current interferometric attitude determination scheme. Simulation results indicate that accuracies of about 1 degree (3 sigma) are possible.

  20. Parametric State Space Structuring

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Tilgner, Marco

    1997-01-01

    Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of a continuous-time Markov chain are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.

  1. Mathematics of Quantization and Quantum Fields

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Gérard, Christian

    2013-03-01

    Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.

  2. Tensor Sparse Coding for Positive Definite Matrices.

    PubMed

    Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikos

    2013-08-02

    In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (e.g., image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices, which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.

  3. Tensor sparse coding for positive definite matrices.

    PubMed

    Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2014-03-01

    In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (for example, image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices, which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.

  4. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338

  5. Multiscale vector fields for image pattern recognition

    NASA Technical Reports Server (NTRS)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
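A minimal sketch of the vector-sum idea, with made-up response strengths for a hypothetical bank of 8 orientation-selective filters at one pixel. One hedge: because orientation is periodic with period pi, the sketch sums vectors at doubled angles and halves the resultant's angle (a common remedy for the bias of a literal sum; the paper's exact formulation may differ):

```python
import numpy as np

# Hypothetical filter bank: preferred orientations and invented responses.
angles = np.linspace(0, np.pi, 8, endpoint=False)   # preferred orientations
responses = np.array([0.1, 0.2, 1.5, 0.4, 0.1, 0.0, 0.1, 0.2])

# Each filter contributes a vector whose length is its response strength
# and whose direction encodes its preferred orientation (doubled angle).
vx = (responses * np.cos(2 * angles)).sum()
vy = (responses * np.sin(2 * angles)).sum()
est = (0.5 * np.arctan2(vy, vx)) % np.pi   # local orientation estimate
strength = np.hypot(vx, vy)                # strength of orientation preference

print(est, np.pi / 4)   # estimate matches the dominant filter at pi/4
```

The resultant's magnitude plays the role the abstract describes: it is large when one orientation dominates and small for isotropic structure.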

  6. Vector calculus in non-integer dimensional space and its applications to fractal media

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-02-01

    We suggest a generalization of vector calculus to the case of non-integer dimensional space. The first- and second-order operations, such as the gradient, divergence, and the scalar and vector Laplace operators, are defined for non-integer dimensional space. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.
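For fields independent of angles, such non-integer dimensional operators reduce to radial forms; a commonly quoted version of the scalar operators in dimension D (the exact normalization may differ from the paper's) is:

```latex
\nabla_D f(r) = \frac{\partial f}{\partial r}\,\mathbf{e}_r, \qquad
\Delta_D f(r) = \frac{1}{r^{D-1}}\,\frac{\partial}{\partial r}\!\left(r^{D-1}\,\frac{\partial f}{\partial r}\right)
= \frac{\partial^2 f}{\partial r^2} + \frac{D-1}{r}\,\frac{\partial f}{\partial r}.
```

For integer D = 3 this reduces to the familiar radial Laplacian, which provides a quick sanity check on the formula.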

  7. Discontinuous finite element method for vector radiative transfer

    NASA Astrophysics Data System (ADS)

    Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping

    2017-03-01

    The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and in a two-dimensional square medium. The DFEM results agree very well with benchmark solutions from published references, showing that the developed DFEM is accurate and effective for solving vector radiative transfer problems.

  8. Beaconless Pointing for Deep-Space Optical Communication

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.; Aretskin-Hariton, Eliot; Le, Dzu K.; Sands, Obed S.; Wroblewski, Adam

    2016-01-01

    Free space optical communication is of interest to NASA as a complement to existing radio frequency communication methods. The potential for an increase in science data return capability over current radio-frequency communications is the primary objective. Deep space optical communication requires laser beam pointing accuracy on the order of a few microradians. The laser beam pointing approach discussed here operates without the aid of a terrestrial uplink beacon. Precision pointing is obtained from an on-board star tracker in combination with inertial rate sensors and an outgoing beam reference vector. The beaconless optical pointing system presented in this work is the current approach for the Integrated Radio and Optical Communication (iROC) project.

  9. Assessing semantic similarity of texts - Methods and algorithms

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining structured representative model of the documents in a corpus by means of extracting and selecting the features, characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at syntactical and semantic level. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space as well as similarity calculation are presented.
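The LSA pipeline described above fits in a dozen lines of NumPy: build a term-document matrix, truncate its SVD, and compare documents by cosine similarity in the reduced latent space. The tiny matrix below is invented for illustration:

```python
import numpy as np

# Tiny term-document matrix (terms x docs); counts are invented.
# Docs 0-1 share vocabulary; doc 2 uses different terms.
A = np.array([
    [2, 3, 0],   # "vector"
    [1, 2, 0],   # "space"
    [0, 1, 0],   # "model"
    [0, 0, 4],   # "protein"
    [0, 0, 2],   # "gene"
], dtype=float)

# LSA: truncated SVD projects documents into a k-dimensional latent space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T     # one row per document

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cos(docs_latent[0], docs_latent[1]))  # high: shared latent concept
print(cos(docs_latent[0], docs_latent[2]))  # ~0: unrelated topics
```

Working in the reduced space is what lets LSA rate two documents as similar through shared latent concepts rather than exact term overlap.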

  10. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
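The bound-narrowing idea can be caricatured in a few lines: sample a box, keep the fittest parameter sets, and shrink the box to their span. This is a deliberately minimal sketch on a toy quadratic, not the BARE/LOBARE algorithm with its Bayesian updating; all names and constants are invented:

```python
import numpy as np

def narrow_bounds(objective, lo, hi, rounds=8, samples=200, keep=20, seed=0):
    """Iteratively shrink a sampling box around the fittest parameter sets.

    Caricature of bound-narrowing recursive estimation: sample the box,
    rank by fitness (lower objective is better), and reset the box to
    enclose the best sets.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(rounds):
        pts = rng.uniform(lo, hi, size=(samples, len(lo)))
        best = pts[np.argsort([objective(p) for p in pts])[:keep]]
        lo, hi = best.min(axis=0), best.max(axis=0)
    return (lo + hi) / 2

# Toy problem: recover the minimizer of a quadratic with optimum (1.5, -0.5).
target = np.array([1.5, -0.5])
f = lambda p: np.sum((p - target) ** 2)
est = narrow_bounds(f, lo=[-5, -5], hi=[5, 5])
print(np.round(est, 2))  # close to [1.5, -0.5]
```

Like the approach in the abstract, each round samples only within the "parent" bounds inherited from the previous round, so the search space contracts toward the best parameter set.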

  11. Airborne Evaluation and Demonstration of a Time-Based Airborne Inter-Arrival Spacing Tool

    NASA Technical Reports Server (NTRS)

    Lohr, Gary W.; Oseguera-Lohr, Rosa M.; Abbott, Terence S.; Capron, William R.; Howell, Charles T.

    2005-01-01

    An airborne tool has been developed that allows an aircraft to obtain a precise inter-arrival time-based spacing interval from the preceding aircraft. The Advanced Terminal Area Approach Spacing (ATAAS) tool uses Automatic Dependent Surveillance-Broadcast (ADS-B) data to compute speed commands for the ATAAS-equipped aircraft to obtain this inter-arrival spacing behind another aircraft. The tool was evaluated in an operational environment at the Chicago O'Hare International Airport and in the surrounding terminal area with three participating aircraft flying fixed route area navigation (RNAV) paths and vector scenarios. Both manual and autothrottle speed management were included in the scenarios to demonstrate the ability to use ATAAS with either method of speed management. The results on the overall delivery precision of the tool, based on a target spacing of 90 seconds, were a mean of 90.8 seconds with a standard deviation of 7.7 seconds. The results for the RNAV and vector cases were, respectively, M=89.3, SD=4.9 and M=91.7, SD=9.0.

  12. Integral transformation solution of free-space cylindrical vector beams and prediction of modified Bessel-Gaussian vector beams.

    PubMed

    Li, Chun-Fang

    2007-12-15

    A unified description of free-space cylindrical vector beams is presented that is an integral transformation solution to the vector Helmholtz equation and the transversality condition. In the paraxial condition, this solution not only includes the known J(1) Bessel-Gaussian vector beam and the axisymmetric Laguerre-Gaussian vector beam that were obtained by solving the paraxial wave equations but also predicts two kinds of vector beam, called a modified Bessel-Gaussian vector beam.

  13. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing the singular value decomposition (SVD). On a large feature-space matrix, the computation of the SVD is very expensive in terms of time and memory requirements. Thus, in this clustering approach, the dimension of the document space of a term-document matrix is reduced before performing the SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.

  14. A Re-Unification of Two Competing Models for Document Retrieval.

    ERIC Educational Resources Information Center

    Bodoff, David

    1999-01-01

    Examines query-oriented versus document-oriented information retrieval and feedback learning. Highlights include a reunification of the two approaches for probabilistic document retrieval and for vector space model (VSM) retrieval; learning in VSM and in probabilistic models; multi-dimensional scaling; and ongoing field studies. (LRW)

  15. Intersection of Three Planes Revisited--An Algebraic Approach

    ERIC Educational Resources Information Center

    Trenkler, Götz; Trenkler, Dietrich

    2017-01-01

    Given three planes in space, a complete characterization of their intersection is provided. Special attention is paid to the case when the intersection set does not consist of one point only. Besides the vector cross product, the tool of the generalized inverse of a matrix is used extensively.

  16. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    NASA Astrophysics Data System (ADS)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  17. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory.

    PubMed

    Lee, M; Leiter, K; Eisner, C; Breuer, A; Wang, X

    2017-09-21

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  18. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores of these algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
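
The score-level classification step can be illustrated with a minimal k-nearest-neighbor vote over three-score vectors. The score values and labels below are made up for illustration; the paper's actual features and classifiers are not reproduced:

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify a score vector by majority vote among its k nearest
    neighbours (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy score vectors: three cross-matched face scores per probe (hypothetical).
train = [[0.9, 0.8, 0.85], [0.88, 0.9, 0.8], [0.2, 0.1, 0.3], [0.15, 0.2, 0.25]]
labels = ["match", "match", "nonmatch", "nonmatch"]
print(knn_predict(train, labels, [0.85, 0.82, 0.8]))  # → match
```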

  19. An Intelligent System for Monitoring the Microgravity Environment Quality On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Lin, Paul P.; Jules, Kenol

    2002-01-01

    An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.

  20. Image search engine with selective filtering and feature-element-based classification

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhang, Yujin; Dai, Shengyang

    2001-12-01

    With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and effective image search engines for the WWW need to be developed. We propose in this paper a selective filtering process and a novel approach to image classification based on feature elements, both used in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format, using two parameters that can be obtained easily. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of an image according to the subjective perception of human beings. Unlike traditional image classification methods, our feature-element-based approach does not calculate distances between vectors in a feature space; instead, it tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.

  1. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimension. The LSS are considered as large, flexible systems requiring both orientation and surface-shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced, and more accurate solutions become possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.

  2. Artificial gravity: Physiological perspectives for long-term space exploration

    NASA Astrophysics Data System (ADS)

    di Prampero, P.; Antonutto, G.

    2005-08-01

    We previously suggested the Twin Bike System (TBS) as a possible countermeasure to prevent cardiovascular deconditioning during long-term space flight. The TBS consists of two bicycles, operated by the astronauts, moving at the very same speed but in opposite directions along the inner wall of a cylindrical space module, thus generating a centrifugal acceleration vector that mimics gravity. To gain some insight into the effectiveness of the TBS, we propose here a similar approach (the Mono Bike System, MBS) to be tested during bed rest on Earth.

  3. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  4. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of the amount of water that will enter reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey, where climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month from the time series of observed monthly river flow, using hybrid models of support vector regression (SVR). Monthly river flow observed over the period 1940-2012 for the Kızılırmak River in Turkey has been used to train the method, which was then applied to predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets; SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition; and CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. All three methods for producing the SVR input matrix proved successful, with the SVR-WT combination yielding the highest coefficient of determination and the lowest mean absolute error.
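
One common way to build the kind of input matrix SVR needs from a univariate flow series is a delay embedding, in the spirit of the chaotic (phase-space) approach. A minimal sketch with hypothetical values:

```python
def delay_embed(series, dim, lag=1):
    """Build an SVR-style input matrix by delay embedding: each row holds
    `dim` lagged values, and the target is the value one step ahead."""
    rows, targets = [], []
    for t in range(len(series) - (dim - 1) * lag - 1):
        rows.append([series[t + i * lag] for i in range(dim)])
        targets.append(series[t + (dim - 1) * lag + 1])
    return rows, targets

flow = [10, 12, 11, 13, 15, 14, 16]   # hypothetical monthly flows
X, y = delay_embed(flow, dim=3)
print(X[0], y[0])  # → [10, 12, 11] 13
```

Each row of `X` is one point of the reconstructed phase space; a kernel regression is then fitted from `X` to `y`.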

  5. Ring polymer dynamics in curved spaces

    NASA Astrophysics Data System (ADS)

    Wolf, S.; Curotto, E.

    2012-07-01

    We formulate an extension of the ring polymer dynamics approach to curved spaces using stereographic projection coordinates. We test the theory by simulating the particle in a ring, {T}^1, mapped by a stereographic projection using three potentials. Two of these are quadratic, and one is a nonconfining sinusoidal model. We propose a new class of algorithms for the integration of the ring polymer Hamilton equations in curved spaces. These are designed to improve the energy conservation of symplectic integrators based on the split operator approach. For manifolds, the position-position autocorrelation function can be formulated in numerous ways. We find that the position-position autocorrelation function computed from configurations in the Euclidean space {R}^2 that contains {T}^1 as a submanifold has the best statistical properties. The agreement with exact results obtained with vector space methods is excellent for all three potentials, for all values of time in the interval simulated, and for a relatively broad range of temperatures.

  6. Real time hardware implementation of power converters for grid integration of distributed generation and STATCOM systems

    NASA Astrophysics Data System (ADS)

    Jaithwa, Ishan

    Deployment of smart grid technologies is accelerating. The smart grid enables bidirectional flows of energy and of energy-related communications, and the future electricity grid will look very different from today's power system. Large variable renewable energy sources will provide a greater portion of electricity, small DERs and energy storage systems will become more common, and utilities will operate many different kinds of energy-efficiency programs. All of these changes will add complexity to the grid and require operators to respond to fast dynamic changes to maintain system stability and security. This thesis investigates advanced control technology for grid integration of renewable energy sources and STATCOM systems by verifying it in real-time hardware experiments on two different platforms: dSPACE and OPAL-RT. Three controls (conventional vector control, direct vector control, and an intelligent neural-network control) were first simulated in Matlab to check the stability and safety of the system and were then implemented in real-time hardware on the dSPACE and OPAL-RT systems. The thesis then shows that the dynamic-programming (DP) methods employed to train the neural networks outperform the other controllers: an optimal control strategy is developed to ensure effective power delivery and to improve system stability. Real-time hardware implementation demonstrates that the neural vector control approach produces the fastest response time, low overshoot, and the best overall performance compared with the conventional standard vector control method and the DCC vector control technique. Finally, the entrepreneurial approach taken to drive the technologies from the lab to market via ORANGE ELECTRIC is discussed in brief.

  7. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large-scale and contain a large number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The work first defines deviation vectors for sensor observations, and then defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, the resulting vector of weights on each direction is used for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability, utilizes that weight vector to isolate and identify multiple sensor faults, and discusses isolatability and identifiability. Simulation examples and a comparison with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
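
The projection step can be sketched as dot products of a deviation vector with normalized fault directions. The BSFM below is a hypothetical identity-like example, not one derived by the paper's methods:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def fault_weights(deviation, bsfm):
    """Project a process deviation vector onto each normalized basic fault
    direction; large weights point to the faulty sensors."""
    return [sum(a * b for a, b in zip(deviation, f)) for f in bsfm]

# Hypothetical BSFM: one fault direction per sensor.
bsfm = [normalize([1, 0, 0]), normalize([0, 1, 0]), normalize([0, 0, 1])]
deviation = [0.1, 2.5, 0.05]          # sensor 2 has drifted
w = fault_weights(deviation, bsfm)
print(w.index(max(w)))  # → 1 (second sensor isolated)
```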

  8. Nonnormal operators in physics, a singular-vectors approach: illustration in polarization optics.

    PubMed

    Tudor, Tiberiu

    2016-04-20

    The singular-vectors analysis of a general nonnormal operator defined on a finite-dimensional complex vector space is given in the frame of a pure operatorial ("nonmatrix," "coordinate-free") approach, performed in a Dirac language. The general results are applied in the field of polarization optics, where the nonnormal operators are widespread as operators of various polarization devices. Two nonnormal polarization devices representative for the class of nonnormal and even pathological operators-the standard two-layer elliptical ideal polarizer (singular operator) and the three-layer ambidextrous ideal polarizer (singular and defective operator)-are analyzed in detail. It is pointed out that the unitary polar component of the operator exists and preserves, in such pathological case too, its role of converting the input singular basis of the operator in its output singular basis. It is shown that for any nonnormal ideal polarizer a complementary one exists, so that the tandem of their operators uniquely determines their (common) unitary polar component.

  9. Mapping higher-order relations between brain structure and function with embedded vector representations of connectomes.

    PubMed

    Rosenthal, Gideon; Váša, František; Griffa, Alessandra; Hagmann, Patric; Amico, Enrico; Goñi, Joaquín; Avidan, Galia; Sporns, Olaf

    2018-06-05

    Connectomics generates comprehensive maps of brain networks, represented as nodes and their pairwise connections. The functional roles of nodes are defined by their direct and indirect connectivity with the rest of the network. However, the network context is not directly accessible at the level of individual nodes. Similar problems in language processing have been addressed with algorithms such as word2vec that create embeddings of words and their relations in a meaningful low-dimensional vector space. Here we apply this approach to create embedded vector representations of brain networks or connectome embeddings (CE). CE can characterize correspondence relations among brain regions, and can be used to infer links that are lacking from the original structural diffusion imaging, e.g., inter-hemispheric homotopic connections. Moreover, we construct predictive deep models of functional and structural connectivity, and simulate network-wide lesion effects using the face processing system as our application domain. We suggest that CE offers a novel approach to revealing relations between connectome structure and function.
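
A word2vec-style embedding of a connectome starts from "sentences" of nodes. The following sketch generates such random-walk corpora over a toy adjacency structure (the embedding training itself is omitted, and the graph is hypothetical):

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Generate fixed-length random walks over an adjacency dict; the walks
    play the role of 'sentences' fed to a word2vec-style embedder."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                node = rng.choice(adj[node])
                walk.append(node)
            walks.append(walk)
    return walks

# Toy 4-node connectome (undirected, given as adjacency lists).
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
walks = random_walks(adj)
print(len(walks), len(walks[0]))  # → 8 5
```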

  10. On the estimation of the domain of attraction for discrete-time switched and hybrid nonlinear systems

    NASA Astrophysics Data System (ADS)

    Kit Luk, Chuen; Chesi, Graziano

    2015-11-01

    This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently from the other ones. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.

  11. Soft and hard classification by reproducing kernel Hilbert space methods.

    PubMed

    Wahba, Grace

    2002-12-24

    Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {yi, ti, i = 1, …, n}, where yi is the response for the ith subject and ti is a vector of attributes for this subject. The value of yi is a label that indicates which category the subject came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability pj(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.
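
The support-vector-machine idea can be sketched with stochastic subgradient descent on the hinge loss (a Pegasos-style simplification on toy separable data, not the RKHS formulation of the article; all names and values are illustrative):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic subgradient descent on the regularized
    hinge loss; labels must be +1 / -1."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]      # shrink (regularizer)
            if margin < 1:                              # hinge subgradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

# Linearly separable toy data (bias folded in as a constant 1 feature).
X = [[1, 2, 1], [2, 3, 1], [-1, -2, 1], [-2, -1, 1]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
preds = [1 if sum(a * b for a, b in zip(w, x)) >= 0 else -1 for x in X]
print(preds == y)  # → True
```

A kernelized version would replace the inner products with kernel evaluations, which is where the RKHS machinery enters.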

  12. The Vector Space as a Unifying Concept in School Mathematics.

    ERIC Educational Resources Information Center

    Riggle, Timothy Andrew

    The purpose of this study was to show how the concept of vector space can serve as a unifying thread for mathematics programs--elementary school to pre-calculus college level mathematics. Indicated are a number of opportunities to demonstrate how emphasis upon the vector space structure can enhance the organization of the mathematics curriculum.…

  13. Feature-space-based FMRI analysis using the optimal linear transformation.

    PubMed

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method for extending OLT from MRI to functional MRI (fMRI) to improve activation-detection performance over conventional approaches to fMRI analysis. In this method, first, ideal hemodynamic response time series for the different stimuli are generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, hypothetical signature vectors for the different activity patterns of interest are constructed from the ideal hemodynamic responses, and OLT is used to extract features of the fMRI data. The resulting feature space has particular geometric clustering properties and is classified into groups, each pertaining to an activity pattern of interest; the applied signature vector for each group is obtained by averaging. Third, using the applied signature vectors, OLT is applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over GLM-based analysis in detecting brain activities.
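
The first step, convolving a hemodynamic response model with the stimulus timing to obtain ideal response time series, can be sketched directly. The HRF shape and stimulus onsets below are hypothetical:

```python
def convolve(signal, kernel):
    """Discrete linear convolution (full length), as used to build ideal
    hemodynamic response time series from the stimulus timing."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

# Hypothetical stimulus onsets (1 = stimulus at that scan) and toy HRF shape.
stimulus = [1, 0, 0, 1, 0, 0]
hrf = [0.0, 0.6, 1.0, 0.4]
ideal = convolve(stimulus, hrf)[:len(stimulus)]
print(ideal)  # → [0.0, 0.6, 1.0, 0.4, 0.6, 1.0]
```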

  14. Gaussian statistics for palaeomagnetic vectors

    USGS Publications Warehouse

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities.
With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.

  15. Gaussian statistics for palaeomagnetic vectors

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. 
With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
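
The unimodal single-polarity model can be illustrated by sampling an isotropic Gaussian about a mean vector and reading off intensities and off-axis angles. The mean field and dispersion below are arbitrary illustrative values, not data from the study:

```python
import math
import random

def sample_vectors(mean, sigma, n, seed=0):
    """Draw n vectors from an isotropic Gaussian about `mean` (the unimodal
    single-polarity model) and return their intensities and off-axis angles
    in degrees."""
    rng = random.Random(seed)
    m_norm = math.sqrt(sum(c * c for c in mean))
    intensities, angles = [], []
    for _ in range(n):
        v = [c + rng.gauss(0.0, sigma) for c in mean]
        f = math.sqrt(sum(x * x for x in v))
        cos_a = sum(x * c for x, c in zip(v, mean)) / (f * m_norm)
        intensities.append(f)
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_a)))))
    return intensities, angles

mean = [0.0, 0.0, 40.0]            # hypothetical field vector, arbitrary units
F, a = sample_vectors(mean, sigma=4.0, n=2000)
# Small relative dispersion: intensities cluster near |mean|.
print(abs(sum(F) / len(F) - 40.0) < 2.0)  # → True
```

In this small-dispersion regime the intensity histogram is approximately normal, consistent with the Rician limit described above.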

  16. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    ERIC Educational Resources Information Center

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  17. An Institutional Approach to University Mathematics Education: From Dual Vector Spaces to Questioning the World

    ERIC Educational Resources Information Center

    Winsløw, Carl; Barquero, Berta; De Vleeschouwer, Martine; Hardy, Nadia

    2014-01-01

    University mathematics education (UME) is considered, in this paper, as a kind of "didactic practice"--characterised by institutional settings and by the purpose of inducting students into "mathematical practices." We present a research programme -- the anthropological theory of the didactic (ATD)--in which this rough…

  18. Fractal electrodynamics via non-integer dimensional space approach

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss law, the fractal Ampère circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of a non-integer dimensional space is suggested.
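
The Gauss-law argument behind such non-integer dimensional fields can be illustrated by the radial scaling it implies for a point charge: holding the flux through a radius-r sphere constant gives |E| ∝ r^(1-D). Constants are omitted and the function name is hypothetical:

```python
def field_scaling(r, D):
    """Radial scaling of a point-charge field from a Gauss-law argument in a
    D-dimensional space: constant flux through a sphere of radius r implies
    |E| proportional to r**(1 - D). D-dependent constants are omitted."""
    return r ** (1.0 - D)

# D = 3 recovers the inverse-square law; for a fractal medium with D = 2.7
# the field decays more slowly with distance.
print(field_scaling(2.0, 3.0))  # → 0.25
print(field_scaling(2.0, 2.7) > field_scaling(2.0, 3.0))  # → True
```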

  19. Rhotrix Vector Spaces

    ERIC Educational Resources Information Center

    Aminu, Abdulhadi

    2010-01-01

    By rhotrix we understand an object that lies in some way between (n x n)-dimensional matrices and (2n - 1) x (2n - 1)-dimensional matrices. Representation of vectors in rhotrices is different from the representation of vectors in matrices. A number of vector spaces in matrices and their properties are known. On the other hand, little seems to be…

  20. Approach to fitting parameters and clustering for characterising measured voltage dips based on two-dimensional polarisation ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, E.

    An alternative approach to characterise real voltage dips is proposed and evaluated in this study. The proposed methodology is based on voltage-space vector solutions, identifying parameters for ellipse trajectories by using the least-squares algorithm applied on a sliding window along the disturbance. The most likely patterns are then estimated through a clustering process based on the k-means algorithm. The objective is to offer an efficient and easily implemented alternative to characterise faults and visualise the most likely instantaneous phase-voltage evolution during events through their corresponding voltage-space vector trajectories. This novel solution minimises the data to be stored but maintains extensive information about the dips, including starting and ending transients. The proposed methodology has been applied satisfactorily to real voltage dips obtained from intensive field-measurement campaigns carried out in a Spanish wind power plant over a time period of several years. A comparison to traditional minimum root mean square-voltage and time-duration classifications is also included in this study.
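
    The fitting step can be illustrated in miniature. The study fits general ellipse trajectories over a sliding window; the sketch below is a deliberate simplification that fits only a centred, axis-aligned ellipse u*x^2 + v*y^2 = 1, which is linear in (u, v) and therefore reduces to closed-form 2x2 normal equations:

```python
from math import cos, sin, pi, sqrt

def fit_axis_aligned_ellipse(points):
    """Least-squares fit of u*x^2 + v*y^2 = 1 to (x, y) samples.
    Minimizing sum((u*x^2 + v*y^2 - 1)^2) is linear in (u, v), so the
    normal equations form a 2x2 system solved in closed form.
    Returns the semi-axes (a, b) = (1/sqrt(u), 1/sqrt(v))."""
    Sxx = sum(x ** 4 for x, _ in points)
    Syy = sum(y ** 4 for _, y in points)
    Sxy = sum((x * y) ** 2 for x, y in points)
    bx = sum(x * x for x, _ in points)
    by = sum(y * y for _, y in points)
    det = Sxx * Syy - Sxy * Sxy
    u = (bx * Syy - by * Sxy) / det
    v = (by * Sxx - bx * Sxy) / det
    return 1 / sqrt(u), 1 / sqrt(v)

# Noise-free voltage-space-vector trajectory with semi-axes 2 and 1.
samples = [(2 * cos(2 * pi * k / 12), sin(2 * pi * k / 12))
           for k in range(12)]
```

    A real dip trajectory is offset, rotated, and noisy, so the paper solves a richer conic fit per window; the reduction to linear least squares is the same idea.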

  1. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
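
    A toy version of the two-step search may clarify the structure: a coarse grid traverse first narrows the global range to the best cell's neighbourhood, then a particle swarm refines within it. The 1-D objective below stands in for the SVR validation error over (C, epsilon); parameter names and constants are illustrative, not the paper's.

```python
import random
random.seed(0)  # deterministic sketch

def coarse_grid(f, lo, hi, steps=5):
    """Step 1 (grid traverse): evaluate cell centres on a coarse grid
    and return local bounds around the best cell, narrowing the search
    from the global range to a local neighbourhood."""
    w = (hi - lo) / steps
    _, i = min((f(lo + (k + 0.5) * w), k) for k in range(steps))
    centre = lo + (i + 0.5) * w
    return centre - w, centre + w

def pso(f, lo, hi, n=10, iters=40):
    """Step 2: minimal 1-D particle swarm optimization inside the
    narrowed bounds (inertia 0.7, cognitive/social weights 1.5)."""
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest
```

    The GTA pass keeps the expensive fine-grained search cheap by shrinking the space PSO must explore.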

  2. Fast higher-order MR image reconstruction using singular-vector separation.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2012-07-01

    Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration. However, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
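
    The core trick can be sketched: the dominant singular triplet of a sampled (space x time) perturbation matrix is its best separable rank-1 term, so truncating the singular expansion yields the compact sums of separable functions the reconstruction needs. Below is a minimal power-iteration version for a matrix stored as nested lists (an assumption of this sketch, not the paper's implementation):

```python
from math import sqrt

def top_singular_triplet(A, iters=200):
    """Dominant singular triplet (s, u, v) of matrix A via power
    iteration on A^T A; s * outer(u, v) is the best rank-1 separable
    space-time approximation of A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        nw = sqrt(sum(x * x for x in w))
        v = [x / nw for x in w]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    s = sqrt(sum(x * x for x in u))
    return s, [x / s for x in u], v
```

    Subtracting the rank-1 term and repeating gives further separable terms; the paper additionally derives how many terms to keep from an error-propagation analysis.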

  3. Reduced conservatism in stability robustness bounds by state transformation

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.; Liang, Z.

    1986-01-01

    This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variance of the conservatism of the Liapunov approach with respect to the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.

  4. Thyra Abstract Interface Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A.

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.

  5. Robust stability of second-order systems

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1993-01-01

    A feedback linearization technique is used in conjunction with passivity concepts to design robust controllers for space robots. It is assumed that bounded modeling uncertainties exist in the inertia matrix and the vector representing the Coriolis, centripetal, and friction forces. Under these assumptions, the controller guarantees asymptotic tracking of the joint variables. A Lagrangian approach is used to develop a dynamic model for space robots. Closed-loop simulation results are illustrated for a simple case of a single link planar manipulator with freely floating base.

  6. Dynamics in the Decompositions Approach to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Harding, John

    2017-12-01

    In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.

  7. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
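
    A minimal sketch of the proposed normalization, assuming features are stored one row per subject and `control_idx` (a name chosen here for illustration) lists the control rows: the mean and standard deviation are estimated from controls only and then applied to all subjects, so between-group separation does not inflate the scaling.

```python
from statistics import mean, stdev

def control_normalize(features, control_idx):
    """Z-score every feature using the mean/SD estimated from the
    control group only, instead of from the whole sample."""
    n_feat = len(features[0])
    ctrl = [features[i] for i in control_idx]
    mus = [mean(row[j] for row in ctrl) for j in range(n_feat)]
    sds = [stdev(row[j] for row in ctrl) for j in range(n_feat)]
    return [[(row[j] - mus[j]) / sds[j] for j in range(n_feat)]
            for row in features]
```

    With whole-sample standardization, a feature that separates patients from controls has large total variance and gets down-weighted; control-based scaling avoids exactly that attenuation.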

  8. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three for massive vector modes and two for massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the resulting theories under such transformations.

  9. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually distribute on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, which extends LLE by exploiting the fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space by the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach obviously improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771

  10. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
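
    A toy version of the linear-equation case may make the idea concrete (an illustration, not the electronic-structure implementation): successive residuals are appended as the subspace basis with no orthonormalization, and each cycle solves the small projected system (V^T A V) y = V^T b. This sketch assumes a small symmetric positive-definite matrix stored as nested lists:

```python
from math import sqrt

def gauss_solve(M, rhs):
    """Tiny dense Gaussian elimination with partial pivoting."""
    k = len(rhs)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k + 1):
                A[r][j] -= f * A[c][j]
    y = [0.0] * k
    for r in reversed(range(k)):
        y[r] = (A[r][k] - sum(A[r][j] * y[j]
                              for j in range(r + 1, k))) / A[r][r]
    return y

def nks_solve(A, b, tol=1e-10, max_dim=30):
    """Galerkin solve of A x = b using raw residuals as a
    nonorthonormal Krylov basis (no prior orthonormalization)."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n))
                        for i in range(n)]
    V = [b[:]]                      # residual of the x = 0 start
    x = [0.0] * n
    for _ in range(max_dim):
        AV = [matvec(v) for v in V]
        k = len(V)
        G = [[sum(V[i][p] * AV[j][p] for p in range(n))
              for j in range(k)] for i in range(k)]   # V^T A V
        c = [sum(V[i][p] * b[p] for p in range(n)) for i in range(k)]
        y = gauss_solve(G, c)
        x = [sum(y[j] * V[j][p] for j in range(k)) for p in range(n)]
        Ax = matvec(x)
        r = [b[p] - Ax[p] for p in range(n)]
        if sqrt(sum(e * e for e in r)) < tol:
            break
        V.append(r)                 # new basis vector, unnormalized
    return x
```

    Skipping the orthonormalization is what lets the full method exploit cheap, low-accuracy matrix-vector products as the residual norm shrinks.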

  11. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  12. Application of eco-friendly tools and eco-bio-social strategies to control dengue vectors in urban and peri-urban settings in Thailand

    PubMed Central

    Kittayapong, Pattamaporn; Thongyuan, Suporn; Olanratmanee, Phanthip; Aumchareoun, Worawit; Koyadun, Surachart; Kittayapong, Rungrith; Butraporn, Piyarat

    2012-01-01

    Background Dengue is considered one of the most important vector-borne diseases in Thailand. Its incidence is increasing despite routine implementation of national dengue control programmes. This study, conducted during 2010, aimed to demonstrate an application of integrated, community-based, eco-bio-social strategies in combination with locally-produced eco-friendly vector control tools in the dengue control programme, emphasizing urban and peri-urban settings in eastern Thailand. Methodology Three different community settings were selected and were randomly assigned to intervention and control clusters. Key community leaders and relevant governmental authorities were approached to participate in this intervention programme. Ecohealth volunteers were identified and trained in each study community. They were selected among active community health volunteers and were trained by public health experts to conduct vector control activities in their own communities using environmental management in combination with eco-friendly vector control tools. These trained ecohealth volunteers carried out outreach health education and vector control during household visits. Management of public spaces and public properties, especially solid waste management, was efficiently carried out by local municipalities. Significant reduction in the pupae per person index in the intervention clusters when compared to the control ones was used as a proxy to determine the impact of this programme. Results Our community-based dengue vector control programme demonstrated a significant reduction in the pupae per person index during entomological surveys which were conducted at two-month intervals from May 2010 for the total of six months in the intervention and control clusters. The programme also raised awareness in applying eco-friendly vector control approaches and increased intersectoral and household participation in dengue control activities. 
Conclusion An eco-friendly dengue vector control programme was successfully implemented in urban and peri-urban settings in Thailand, through intersectoral collaboration and practical action at household level, with a significant reduction in vector densities. PMID:23318236

  13. Application of eco-friendly tools and eco-bio-social strategies to control dengue vectors in urban and peri-urban settings in Thailand.

    PubMed

    Kittayapong, Pattamaporn; Thongyuan, Suporn; Olanratmanee, Phanthip; Aumchareoun, Worawit; Koyadun, Surachart; Kittayapong, Rungrith; Butraporn, Piyarat

    2012-12-01

    Dengue is considered one of the most important vector-borne diseases in Thailand. Its incidence is increasing despite routine implementation of national dengue control programmes. This study, conducted during 2010, aimed to demonstrate an application of integrated, community-based, eco-bio-social strategies in combination with locally-produced eco-friendly vector control tools in the dengue control programme, emphasizing urban and peri-urban settings in eastern Thailand. Three different community settings were selected and were randomly assigned to intervention and control clusters. Key community leaders and relevant governmental authorities were approached to participate in this intervention programme. Ecohealth volunteers were identified and trained in each study community. They were selected among active community health volunteers and were trained by public health experts to conduct vector control activities in their own communities using environmental management in combination with eco-friendly vector control tools. These trained ecohealth volunteers carried out outreach health education and vector control during household visits. Management of public spaces and public properties, especially solid waste management, was efficiently carried out by local municipalities. Significant reduction in the pupae per person index in the intervention clusters when compared to the control ones was used as a proxy to determine the impact of this programme. Our community-based dengue vector control programme demonstrated a significant reduction in the pupae per person index during entomological surveys which were conducted at two-month intervals from May 2010 for the total of six months in the intervention and control clusters. The programme also raised awareness in applying eco-friendly vector control approaches and increased intersectoral and household participation in dengue control activities. 
An eco-friendly dengue vector control programme was successfully implemented in urban and peri-urban settings in Thailand, through intersectoral collaboration and practical action at household level, with a significant reduction in vector densities.

  14. Material decomposition in an arbitrary number of dimensions using noise compensating projection

    NASA Astrophysics Data System (ADS)

    O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh

    2017-03-01

    Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if - due to noise - a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either throwing away those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of those pixels to be included within. Therefore, projection onto the boundary becomes an important option. But projection in higher than 3 dimensional space is not possible with standard vector algebra: the cross-product is not defined. Methods: We describe a technique which employs Clifford Algebra to perform projection in an arbitrary number of dimensions. Clifford Algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparisons of the accuracy of different threshold combinations versus ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford Algebra projection to mitigate noise.
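
    Setting the Clifford-algebra machinery aside, the projection step itself can be sketched with ordinary least squares, which gives the same orthogonal projection of a pixel's HU vector onto the span of basis-material vectors in any number of energy dimensions. Shown here for two materials, where the normal equations are 2x2 (names illustrative):

```python
def project_onto_two_materials(p, b1, b2):
    """Orthogonal projection of HU vector p (one entry per energy
    threshold) onto span(b1, b2), via the 2x2 normal equations
    (B^T B) c = B^T p. Returns the material coefficients and the
    projected point; works in any number of dimensions."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    a11, a12, a22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    r1, r2 = dot(b1, p), dot(b2, p)
    det = a11 * a22 - a12 * a12
    c1 = (r1 * a22 - r2 * a12) / det
    c2 = (r2 * a11 - r1 * a12) / det
    return (c1, c2), [c1 * x + c2 * y for x, y in zip(b1, b2)]
```

    The paper's projection onto the boundary of admissible mixtures additionally constrains the coefficients; the unconstrained subspace projection above is the core linear-algebra step that the cross-product cannot provide beyond three dimensions.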

  15. PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.

    PubMed

    Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay

    2016-01-01

    Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.

  16. PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG

    PubMed Central

    Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay

    2016-01-01

    Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals. PMID:27340397

  17. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.

  18. Chemical data visualization and analysis with incremental generative topographic mapping: big data challenge.

    PubMed

    Gaspar, Héléna A; Baskin, Igor I; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2015-01-26

    This paper is devoted to the analysis and visualization in 2-dimensional space of large data sets of millions of compounds using the incremental version of generative topographic mapping (iGTM). The iGTM algorithm implemented in the in-house ISIDA-GTM program was applied to a database of more than 2 million compounds combining data sets of 36 chemicals suppliers and the NCI collection, encoded either by MOE descriptors or by MACCS keys. Taking advantage of the probabilistic nature of GTM, several approaches to data analysis were proposed. The chemical space coverage was evaluated using the normalized Shannon entropy. Different views of the data (property landscapes) were obtained by mapping various physical and chemical properties (molecular weight, aqueous solubility, LogP, etc.) onto the iGTM map. The superposition of these views helped to identify the regions in the chemical space populated by compounds with desirable physicochemical profiles and the suppliers providing them. The data sets similarity in the latent space was assessed by applying several metrics (Euclidean distance, Tanimoto and Bhattacharyya coefficients) to data probability distributions based on cumulated responsibility vectors. As a complementary approach, data sets were compared by considering them as individual objects on a meta-GTM map, built on cumulated responsibility vectors or property landscapes produced with iGTM. We believe that the iGTM methodology described in this article represents a fast and reliable way to analyze and visualize large chemical databases.
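
    The coverage measure mentioned above can be sketched directly: Shannon entropy of the per-node compound counts (or cumulated responsibilities), normalized by the log of the number of map nodes so that 1 means perfectly even coverage. Names here are illustrative, not the ISIDA-GTM API:

```python
from math import log

def normalized_shannon_entropy(occupancy):
    """Normalized Shannon entropy of GTM-node occupancy counts:
    returns 1.0 for a perfectly even spread over the map and 0.0 when
    all compounds fall on a single node."""
    total = sum(occupancy)
    ps = [c / total for c in occupancy if c > 0]
    h = -sum(p * log(p) for p in ps)
    return h / log(len(occupancy))
```

    The same per-node probability vectors feed the data-set comparisons via Euclidean, Tanimoto, or Bhattacharyya measures described in the record.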

  19. Semantically enabled image similarity search

    NASA Astrophysics Data System (ADS)

    Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason

    2015-05-01

    Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector, or "embedding," space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally-smoothed but information-limited content, while overhead imagery provides an information-rich but temporally-limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.

  20. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

    Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and range of applications of machine learning methods continues to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of the SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Space-time variability of citrus leprosis as strategic planning for crop management.

    PubMed

    Andrade, Daniel J; Lorençon, José R; Siqueira, Diego S; Novelli, Valdenice M; Bassanezi, Renato B

    2018-01-31

    Citrus leprosis is the most important viral disease of citrus. Knowledge of its spatiotemporal structure is fundamental to a representative sampling plan focused on the disease control approach. Such a well-crafted sampling design helps to reduce pesticide use in agriculture to control pests and diseases. Despite the use of acaricides to control citrus leprosis vector (Brevipalpus spp.) populations, the disease has spread rapidly through experimental areas. Citrus leprosis has an aggregate spatial distribution, with high dependence among symptomatic plants. Temporal variation in disease incidence increased among symptomatic plants by 4% per month. Use of acaricides alone to control the vector of leprosis is insufficient to avoid its incidence in healthy plants. Preliminary investigation into the time and space variation in the incidence of the disease is fundamental to select a sampling plan and determine effective strategies for disease management. © 2018 Society of Chemical Industry.

  2. On orthogonal expansions of the space of vector functions which are square-summable over a given domain and the vector analysis operators

    NASA Technical Reports Server (NTRS)

    Bykhovskiy, E. B.; Smirnov, N. V.

    1983-01-01

    The Hilbert space L2(omega) of vector functions is studied. A breakdown of L2(omega) into orthogonal subspaces is discussed and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, the properties of the operators are examined.

  3. Bundles over nearly-Kahler homogeneous spaces in heterotic string theory

    NASA Astrophysics Data System (ADS)

    Klaput, Michael; Lukas, Andre; Matti, Cyril

    2011-09-01

    We construct heterotic vacua based on six-dimensional nearly-Kahler homogeneous manifolds and non-trivial vector bundles thereon. Our examples are based on three specific group coset spaces. It is shown how to construct line bundles over these spaces, compute their properties and build up vector bundles consistent with supersymmetry and anomaly cancelation. It turns out that the most interesting coset is SU(3)/U(1)2. This space supports a large number of vector bundles which lead to consistent heterotic vacua, some of them with three chiral families.

  4. Dual Vector Spaces and Physical Singularities

    NASA Astrophysics Data System (ADS)

    Rowlands, Peter

    Though we often refer to 3-D vector space as constructed from points, there is no mechanism from within its definition for doing this. In particular, space, on its own, cannot accommodate the singularities that we call fundamental particles. This requires a commutative combination of space as we know it with another 3-D vector space, which is dual to the first (in a physical sense). The combination of the two spaces generates a nilpotent quantum mechanics/quantum field theory, which incorporates exact supersymmetry and ultimately removes the anomalies due to self-interaction. Among the many natural consequences of the dual space formalism are half-integral spin for fermions, zitterbewegung, Berry phase and a zero norm Berwald-Moor metric for fermionic states.

  5. Repellents and New “Spaces of Concern” in Global Health

    PubMed Central

    Kelly, Ann H.; Koudakossi, Hermione N. Boko; Moore, Sarah J.

    2017-01-01

    ABSTRACT Today, malaria prevention hinges upon two domestic interventions: insecticide-treated bed nets and indoor residual spraying. As mosquitoes grow resistant to these tools, however, novel approaches to vector control have become a priority area of malaria research and development. Spatial repellency, a volumetric mode of action that seeks to reduce disease transmission by creating an atmosphere inimical to mosquitoes, represents one way forward. Drawing from research that sought to develop new repellent chemicals in conversation with users from sub-Saharan Africa and the United States, we consider the implications of a non-insecticidal paradigm of vector control for how we understand the political ecology of malaria. PMID:28594568

  6. Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Kordy, Michal Adam

    The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequencies. In the second paper, we consider hexahedral finite element approximation of the electric field for the magnetotelluric forward problem. The near-null space of the system matrix at low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. This is done by solving a Poisson equation using a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method on a number of synthetic inversions and apply it to real data collected in the Cascade Mountains. The last paper considers cross-frequency interpolation of the forward response as well as of the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively in order to minimize the maximum interpolation error. Two error indicator functions are compared. We prove a theorem of almost-always-lucky failure for the case in which the right-hand side depends analytically on frequency. The operator's null space is treated by decomposing the solution into the component in the null space and the component orthogonal to it.

  7. Strategy for reliable strain measurement in InAs/GaAs materials from high-resolution Z-contrast STEM images

    NASA Astrophysics Data System (ADS)

    Vatanparast, Maryam; Vullum, Per Erik; Nord, Magnus; Zuo, Jian-Min; Reenaas, Turid W.; Holmestad, Randi

    2017-09-01

    Geometric phase analysis (GPA), a fast and simple Fourier-space method for strain analysis, can give useful information on accumulated strain and defect propagation in multiple layers of semiconductors, including quantum dot materials. In this work, GPA has been applied to high-resolution Z-contrast scanning transmission electron microscopy (STEM) images. Strain maps determined from different g vectors of these images are compared to each other in order to assess the accuracy of the GPA technique. The SmartAlign tool has been used to improve the STEM image quality, yielding more reliable results. Strain maps from template matching, a real-space approach, are compared with strain maps from GPA, and it is argued that real-space analysis is a better approach than GPA for aberration-corrected STEM images.

  8. The geometric approach to sets of ordinary differential equations and Hamiltonian dynamics

    NASA Technical Reports Server (NTRS)

    Estabrook, F. B.; Wahlquist, H. D.

    1975-01-01

    The calculus of differential forms is used to discuss the local integration theory of a general set of autonomous first order ordinary differential equations. Geometrically, such a set is a vector field V in the space of dependent variables. Integration consists of seeking associated geometric structures invariant along V: scalar fields, forms, vectors, and integrals over subspaces. It is shown that to any field V can be associated a Hamiltonian structure of forms if, when dealing with an odd number of dependent variables, an arbitrary equation of constraint is also added. Families of integral invariants are an immediate consequence. Poisson brackets are isomorphic to Lie products of associated CT-generating vector fields. Hamilton's variational principle follows from the fact that the maximal regular integral manifolds of a closed set of forms must include the characteristics of the set.

  9. Time-Parallel Solutions to Ordinary Differential Equations on GPUs with a New Functional Optimization Approach Related to the Sobolev Gradient Method

    DTIC Science & Technology

    2012-10-01

    black and approximations in cyan and magenta. The second ODE is the pendulum equation. This ODE was also implemented using Crank... The drawback of approaches like the one proposed can be observed with a very simple example. Suppose a vector is found by applying 4 linear... Figure 2. A phase space plot of the pendulum example. Fine solution (black) contains 32768 time steps

  10. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    ERIC Educational Resources Information Center

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)
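The ranking behavior the study examines can be sketched with a toy vector space model; the tiny corpus and the misrecognized token below are invented for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Simple TF-IDF vectors over the shared vocabulary of `docs`."""
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    n = len(tokenized)
    df = {t: sum(t in doc for doc in tokenized) for t in vocab}
    idf = {t: math.log((1 + n) / (1 + df[t])) for t in vocab}
    return [[Counter(doc)[t] * idf[t] for t in vocab] for doc in tokenized]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

query = "quick brown fox"
clean_doc = "the quick brown fox jumps"
ocr_doc = "the quick brovvn fox jumps"   # 'brovvn' mimics an OCR error in 'brown'

q_vec, clean_vec, ocr_vec = tfidf_vectors([query, clean_doc, ocr_doc])

# The OCR error breaks term matching, so the damaged document ranks lower.
print(cosine(q_vec, clean_vec), cosine(q_vec, ocr_vec))
```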

  11. Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.

    PubMed

    Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar

    2012-01-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector, since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable, and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of the accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state space model and a reduced model without bias parameters.
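A minimal sketch of the estimation idea, with the AIV reduced to a two-component measurement (angular acceleration plus a quadratic angular velocity term) and the paper's bias states omitted; all numerical values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.01, 500

# State: [angular velocity, angular acceleration]; constant-acceleration model.
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-6 * np.eye(2)                 # process noise
R = np.diag([0.05**2, 0.05**2])      # measurement noise

omega, alpha = 1.0, 0.3              # simulated truth
x = np.array([0.5, 0.0])             # deliberately poor initial estimate
P = np.eye(2)

for _ in range(steps):
    omega += alpha * dt
    # AIV-style measurement: angular acceleration and quadratic angular velocity.
    z = np.array([alpha, omega**2]) + rng.normal(scale=0.05, size=2)

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: linearize h(x) = [x[1], x[0]**2] about the prediction (EKF step,
    # needed because the quadratic term makes the measurement nonlinear).
    H = np.array([[0.0, 1.0], [2.0 * x[0], 0.0]])
    innovation = z - np.array([x[1], x[0]**2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innovation
    P = (np.eye(2) - K @ H) @ P

print(x[0], omega)   # estimated vs. true angular velocity
```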

  12. Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit

    PubMed Central

    Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar

    2012-01-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector, since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable, and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of the accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state space model and a reduced model without bias parameters. PMID:22778586

  13. Solution of the determinantal assignment problem using the Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Karcanias, Nicos; Leventides, John

    2016-02-01

    The paper provides a direct solution to the determinantal assignment problem (DAP), which unifies all frequency assignment problems of linear control theory. The current approach is based on the solvability of the exterior equation ? where ? is an n-dimensional vector space over ?, which is an integral part of the solution of DAP. New criteria for the existence of solutions and for their computation are given, based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix, ?, of the vector ?, which is constructed from the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable) if and only if ?, where ?; the solution space for a decomposable ? is the space ?. This provides an alternative linear-algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.

  14. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur during long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experiment results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful, and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
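The feature-space optimization step can be sketched with scikit-learn's random forest importances; the synthetic features below merely stand in for the WTFER vectors and six fault classes:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for wavelet-packet energy-rate features over 6 fault classes.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline forest on the full feature space.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Optimize the feature space: keep only the highest-importance features.
top = np.argsort(rf.feature_importances_)[::-1][:10]
rf_opt = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top], y_tr)

print(rf.score(X_te, y_te), rf_opt.score(X_te[:, top], y_te))
```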

  15. Geometric phase of mixed states for three-level open systems

    NASA Astrophysics Data System (ADS)

    Jiang, Yanyan; Ji, Y. H.; Xu, Hualan; Hu, Li-Yun; Wang, Z. S.; Chen, Z. Q.; Guo, L. P.

    2010-12-01

    The geometric phase of mixed states for a three-level open system is defined by connecting the density matrix with a nonunit vector ray in a three-dimensional complex Hilbert space. Because the geometric phase depends only on the smooth curve in this space, it is formulated entirely in terms of geometric structures. In the pure-state limit, our approach agrees with the Berry phase, the Pancharatnam phase, and the Aharonov-Anandan phase. We find, furthermore, that the Berry phase of the mixed state is correlated with population inversions of the three-level open system.

  16. Exploring Protein Dynamics Space: The Dynasome as the Missing Link between Protein Structure and Function

    PubMed Central

    Hensen, Ulf; Meyer, Tim; Haas, Jürgen; Rex, René; Vriend, Gert; Grubmüller, Helmut

    2012-01-01

    Proteins are usually described and classified according to amino acid sequence, structure or function. Here, we develop a minimally biased scheme to compare and classify proteins according to their internal mobility patterns. This approach is based on the notion that proteins not only fold into recurring structural motifs but might also be carrying out only a limited set of recurring mobility motifs. The complete set of these patterns, which we tentatively call the dynasome, spans a multi-dimensional space with axes, the dynasome descriptors, characterizing different aspects of protein dynamics. The unique dynamic fingerprint of each protein is represented as a vector in the dynasome space. The difference between any two vectors, consequently, gives a reliable measure of the difference between the corresponding protein dynamics. We characterize the properties of the dynasome by comparing the dynamics fingerprints obtained from molecular dynamics simulations of 112 proteins but our approach is, in principle, not restricted to any specific source of data of protein dynamics. We conclude that: (1) the dynasome consists of a continuum of proteins, rather than well separated classes; (2) for the majority of proteins we observe strong correlations between structure and dynamics; (3) proteins with similar function carry out similar dynamics, which suggests a new method to improve protein function annotation based on protein dynamics. PMID:22606222
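The fingerprint-as-vector comparison lends itself to a short numpy sketch; the descriptor values below are random stand-ins, not the paper's 112-protein dataset or its actual dynasome descriptors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dynasome fingerprints: each row is one protein's vector of
# dynamics descriptors (e.g., fluctuation amplitudes, correlation times).
fingerprints = rng.normal(size=(5, 8))

# Standardize each descriptor axis so no single descriptor dominates.
z = (fingerprints - fingerprints.mean(axis=0)) / fingerprints.std(axis=0)

def dynamics_distance(i: int, j: int) -> float:
    """Difference between two fingerprint vectors, i.e. how dissimilar
    the corresponding protein dynamics are."""
    return float(np.linalg.norm(z[i] - z[j]))

# Pairwise distance matrix over all proteins in the toy set.
D = np.array([[dynamics_distance(i, j) for j in range(5)] for i in range(5)])
print(D.round(2))
```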

  17. Computational Approaches to Image Understanding.

    DTIC Science & Technology

    1981-10-01

    representing points, edges, surfaces, and volumes to facilitate display. The geometry of perspective and parallel (or orthographic) projection has... of making the image-forming process explicit. This in turn leads to a concern with geometry, such as the properties of the gradient, stereographic, and... dual spaces. Combining geometry and smoothness leads naturally to multi-variate vector analysis, and to differential geometry. For the most part, a

  18. The polarization evolution of electromagnetic waves as a diagnostic method for a motional plasma

    NASA Astrophysics Data System (ADS)

    Shahrokhi, Alireza; Mehdian, Hassan; Hajisharifi, Kamal; Hasanbeigi, Ali

    2017-12-01

    The polarization evolution of electromagnetic (EM) radiation propagating through an electron beam-ion channel system is studied in the presence of the self-magnetic field. Solving the fluid-Maxwell equations to obtain the medium dielectric tensor, the Stokes vector-Mueller matrix approach is employed to determine the polarization of the launched EM wave at any point along the propagation direction, applying the space-dependent Mueller matrix to the initial polarization vector of the wave at the plasma-vacuum interface. Results show that the polarization evolution of the wave is periodic in space along the beam axis with a specified polarization wavelength. Using the obtained results, a novel diagnostic method based on the polarization evolution of the EM waves is proposed to evaluate the electron beam density and velocity. Moreover, to use the mentioned plasma system as a polarizer, the fraction of the output radiation power transmitted through a motional plasma crossed with the input polarization is calculated. The results of the present investigation will contribute greatly to the design of a new EM amplifier with fixed polarization or an EM polarizer, as well as a new diagnostic approach for electron beam systems where the polarimetric method is employed.

  19. All ASD complex and real 4-dimensional Einstein spaces with Λ≠0 admitting a nonnull Killing vector

    NASA Astrophysics Data System (ADS)

    Chudecki, Adam

    2016-12-01

    Anti-self-dual (ASD) 4-dimensional complex Einstein spaces with nonzero cosmological constant Λ equipped with a nonnull Killing vector are considered. It is shown that any conformally nonflat metric of such spaces can be always brought to a special form and the Einstein field equations can be reduced to the Boyer-Finley-Plebański equation (Toda field equation). Some alternative forms of the metric are discussed. All possible real slices (neutral, Euclidean and Lorentzian) of ASD complex Einstein spaces with Λ≠0 admitting a nonnull Killing vector are found.

  20. Unified control/structure design and modeling research

    NASA Technical Reports Server (NTRS)

    Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.

    1986-01-01

    To demonstrate the applicability of control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite-dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach synthesized from elements of these two basic approaches is currently under development. The control-driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite-dimensional Hamiltonian system directly, instead of indirectly, is discussed.

  1. A Space Affine Matching Approach to fMRI Time Series Analysis.

    PubMed

    Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili

    2016-07-01

    For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by combining time-domain and frequency-domain features. The time-domain feature is used to discern different stimuli, while the frequency-domain feature eliminates the delay. We then propose a space affine matching (SAM) algorithm to match fMRI time series using this affine feature, in which a normal vector is estimated using gradient descent to find the optimal time series match. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal. Our approach significantly outperforms the GLM method when the delay exists. The approach can help solve the SFDP problem in fMRI time series matching and is thus of great promise for revealing brain dynamics.

  2. Close Approach Prediction Analysis of the Earth Science Constellation with the Fengyun-1C Debris

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Rand, David K.

    2008-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. Each day, close approach predictions are generated by a U.S. Department of Defense Joint Space Operations Center Orbital Safety Analyst using the high-accuracy Space Object Catalog maintained by the Air Force's 1st Space Control Squadron. Prediction results and other ancillary data such as state vector information are sent to NASA/Goddard Space Flight Center's (GSFC's) Collision Risk Assessment analysis team for review. Collision analysis is performed and the GSFC team works with the ESC member missions to develop risk reduction strategies as necessary. This paper presents various close approach statistics for the ESC. The ESC missions have been affected by debris from the recent anti-satellite test which destroyed the Chinese Fengyun-1C satellite. The paper also presents the percentage of close approach events induced by the Fengyun-1C debris, and presents analysis results which predict the future effects on the ESC caused by this event. Specifically, the Fengyun-1C debris is propagated for twenty years using high-performance computing technology and close approach predictions are generated for the ESC. The percent increase in the total number of conjunction events is considered to be an estimate of the collision risk due to the Fengyun-1C break-up.

  3. Combining the Complete Active Space Self-Consistent Field Method and the Full Configuration Interaction Quantum Monte Carlo within a Super-CI Framework, with Application to Challenging Metal-Porphyrins.

    PubMed

    Li Manni, Giovanni; Smart, Simon D; Alavi, Ali

    2016-03-08

    A novel stochastic Complete Active Space Self-Consistent Field (CASSCF) method has been developed and implemented in the Molcas software package. A two-step procedure is used, in which the CAS configuration interaction secular equations are solved stochastically with the Full Configuration Interaction Quantum Monte Carlo (FCIQMC) approach, while orbital rotations are performed using an approximated form of the Super-CI method. This new method does not suffer from the strong combinatorial limitations of standard MCSCF implementations using direct schemes and can handle active spaces well in excess of those accessible to traditional CASSCF approaches. The density matrix formulation of the Super-CI method makes this step independent of the size of the CI expansion, depending exclusively on one- and two-body density matrices with indices restricted to the relatively small number of active orbitals. No sigma vectors need to be stored in memory for the FCIQMC eigensolver--a substantial gain in comparison to implementations using the Davidson method, which require three or more vectors of the size of the CI expansion. Further, no orbital Hessian is computed, circumventing limitations on basis set expansions. Like the parent FCIQMC method, the present technique is scalable on massively parallel architectures. We present in this report the method and its application to the free-base porphyrin, Mg(II) porphyrin, and Fe(II) porphyrin. In the present study, active spaces up to 32 electrons and 29 orbitals in orbital expansions containing up to 916 contracted functions are treated with modest computational resources. Results are quite promising even without accounting for the correlation outside the active space. The systems here presented clearly demonstrate that large CASSCF calculations are possible via FCIQMC-CASSCF without limitations on basis set size.

  4. Determination of displacements and their derivatives from 3D fringe patterns via extended monogenic phasor method

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2018-05-01

    For 1D signals, it is necessary to resort to a 2D abstract space because the concept of phase utilized in the retrieval of fringe pattern information relies on the use of a vectorial function. Fourier and Hilbert transforms provide in-quadrature signals that lead to the very important basic concept of local phase. A 3D abstract space must hence be generated in order to analyze 2D signals. A 3D vector space in a Cartesian complex space is graphically represented by a Poincare sphere. In this study, the construction of the associated abstract spaces is extended to 3D signals; a 4D hypersphere is defined for that purpose. The proposed approach is illustrated by determining the deformations of the left ventricle of the heart.

  5. Learning with LOGO: Logo and Vectors.

    ERIC Educational Resources Information Center

    Lough, Tom; Tipps, Steve

    1986-01-01

    This is the first of a two-part series on the general concept of vector space. Provides tool procedures to allow investigation of vector properties, vector addition and subtraction, and X and Y components. Lists several sources of additional vector ideas. (JM)

  6. Principal fiber bundle description of number scaling for scalars and vectors: application to gauge theory

    NASA Astrophysics Data System (ADS)

    Benioff, Paul

    2015-05-01

    The purpose of this paper is to put the description of number scaling and its effects on physics and geometry on a firmer foundation, and to make it more understandable. A main point is that two different concepts, number and number value, are combined in the usual representations of number structures. This is valid as long as just one structure of each number type is being considered. It is not valid when different structures of each number type are being considered. Elements of base sets of number structures, considered by themselves, have no meaning. They acquire meaning or value as elements of a number structure. Fiber bundles over a space or space-time manifold, M, are described. The fiber consists of a collection of many real or complex number structures and vector space structures. The structures are parameterized by a real or complex scaling factor, s. A vector space at a fiber level, s, has, as scalars, real or complex number structures at the same level. Connections are described that relate scalar and vector space structures at both neighboring M locations and at neighboring scaling levels. Scalar- and vector-structure-valued fields are described and covariant derivatives of these fields are obtained. Two complex vector fields, each with one real and one imaginary field, appear, with one complex field associated with positions in M and the other with position-dependent scaling factors. A derivation of the covariant derivative for scalar and vector valued fields gives the same vector fields. The derivation shows that the complex vector field associated with scaling fiber levels is the gradient of a complex scalar field. Use of these results in gauge theory shows that the imaginary part of the vector field associated with M positions acts like the electromagnetic field. The physical relevance of the other three fields, if any, is not known.

  7. Managing the resilience space of the German energy system - A vector analysis.

    PubMed

    Schlör, Holger; Venghaus, Sandra; Märker, Carolin; Hake, Jürgen-Friedrich

    2018-07-15

    The UN Sustainable Development Goals formulated in 2016 confirmed the sustainability concept of the Earth Summit of 1992 and supported UNEP's green economy transition concept. The transformation of the energy system (Energiewende) is the keystone of Germany's sustainability strategy and of the German green economy concept. We use ten updated energy-related indicators of the German sustainability strategy to analyse the German energy system. The development of the sustainability indicators is examined in the monitoring process by a vector analysis performed in two-dimensional Euclidean space (the Euclidean plane). The aim of the novel vector analysis is to measure the current status of the Energiewende in Germany and thereby provide decision makers with information about the strains along the remaining pathway, for each single indicator and for the total system, toward meeting the sustainability targets of the Energiewende. Within this vector model, three vectors (the normative sustainable development vector, the real development vector, and the green economy vector) define the resilience space of our analysis. The resilience space encloses a number of vectors representing different pathways, with different technological and socio-economic strains, to achieve a sustainable development of the green economy. In this space, the decision will be made as to whether the government measures will lead to a resilient energy system or whether a readjustment of indicator targets or political measures is necessary.
    The vector analysis enables us to analyse both the government's ambitiousness, expressed in the sustainability targets set for the indicators at the start of the strategy and representing the starting preference order of the German government (SPO), and the current preference order of German society (CPO), which describes what is required to bridge the remaining distance to the specific sustainability goals of the strategy. Copyright © 2018 Elsevier Ltd. All rights reserved.
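    The remaining-pathway computation described above can be sketched in a few lines. The indicator plane, start/target coordinates and normalization below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

# Hypothetical illustration: each indicator is tracked in a 2-D plane
# (elapsed time fraction, normalized indicator progress).  The normative
# vector points from the start (0, 0) to the target (1, 1); the real
# development vector points to the currently observed state.
def remaining_pathway(real_state, target=(1.0, 1.0)):
    """Vector (and its Euclidean length, a rough 'strain' measure) still
    separating the observed state of an indicator from its target."""
    real = np.asarray(real_state, dtype=float)
    tgt = np.asarray(target, dtype=float)
    gap = tgt - real
    return gap, float(np.linalg.norm(gap))

# One indicator: 60 % of the period elapsed, 30 % of the target reached.
gap, strain = remaining_pathway((0.6, 0.3))
```

    Summing such lengths over all ten indicators would give one crude aggregate of the system-wide remaining effort.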

  8. Oversampling the Minority Class in the Feature Space.

    PubMed

    Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar

    2016-09-01

    The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combinations of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to the input space). If the kernel function matches the underlying problem, the classes will be linearly separable and the synthetically generated patterns will lie in the minority-class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS), a Euclidean space isomorphic to the feature space, for oversampling purposes. The proposed method is framed in the context of support vector machines, where imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
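    The convex-combination oversampling idea can be sketched as follows. This is a minimal input-space version in the spirit of SMOTE; the paper performs the operation in an empirical feature space instead:

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_convex(minority, n_new, rng=rng):
    """Generate synthetic minority patterns as convex combinations of
    random pairs of existing minority patterns.  Simplified sketch: done
    here in input space, not in the empirical feature space."""
    minority = np.asarray(minority, dtype=float)
    i = rng.integers(0, len(minority), size=n_new)
    j = rng.integers(0, len(minority), size=n_new)
    lam = rng.random((n_new, 1))          # mixing weights in [0, 1)
    return lam * minority[i] + (1 - lam) * minority[j]

# Three minority patterns forming a triangle; all synthetic points
# stay on segments between pattern pairs, hence inside the hull.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = oversample_convex(minority, 100)
```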

  9. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
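    The adjoint idea can be illustrated on the simplest possible setting. The linear system below is a hypothetical toy, not the Keeling & Gilligan plague model; it shows why one adjoint solve yields derivative information with respect to every parameter at once:

```python
import numpy as np

# Toy steady-state model A x = b; quantity of interest q = c^T x.
# Solving the adjoint system A^T lam = c gives the full gradient
# dq/db = lam at the cost of a single extra linear solve.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])          # we only care about x[0]

x = np.linalg.solve(A, b)
lam = np.linalg.solve(A.T, c)     # adjoint variable

q = c @ x
grad_q_b = lam                    # dq/db_i for all i, from one solve
```

    A finite-difference check (perturb each b_i, re-solve, difference) reproduces the same gradient, but needs one extra solve per parameter, which is what the adjoint approach avoids.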

  10. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    PubMed

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p<.001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p<.001) and 1.34 versus 0.98 with rank order adjustment (p<.001) for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
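    A minimal sketch of a predicate-based vector space model, treating each predicate triple as an indexing unit. The triples below and the plain tf-idf weighting are illustrative stand-ins; the paper's adjusted tf-idf and boost function are more elaborate:

```python
import math
from collections import Counter

# Documents as bags of predicate triples, encoded "subj|rel|obj".
docs = {
    "d1": ["p53|inhibits|tumor_growth", "gene|expressed_in|liver"],
    "d2": ["p53|inhibits|tumor_growth", "p53|binds|dna"],
    "d3": ["aspirin|treats|headache"],
}

def tfidf(doc, corpus):
    """Standard tf-idf weights over predicate triples (illustrative)."""
    tf = Counter(doc)
    n = len(corpus)
    return {t: tf[t] * math.log(n / sum(t in d for d in corpus.values()))
            for t in tf}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

query = tfidf(["p53|inhibits|tumor_growth"], docs)
scores = {d: cosine(query, tfidf(doc, docs)) for d, doc in docs.items()}
```

    Documents sharing the query predicate (d1, d2) score above the unrelated d3, exactly as keywords would, but the matching unit carries subject-relation-object structure.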

  11. Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries

    NASA Astrophysics Data System (ADS)

    Meljanac, Daniel; Meljanac, Stjepan; Pikutić, Danijel

    2017-12-01

    Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincaré-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ -Minkowski spaces and (iii) κ -Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed.

  12. Time-specific ecological niche modeling predicts spatial dynamics of vector insects and human dengue cases.

    PubMed

    Peterson, A Townsend; Martínez-Campos, Carmen; Nakazawa, Yoshinori; Martínez-Meyer, Enrique

    2005-09-01

    Numerous human diseases (malaria, dengue, yellow fever and leishmaniasis, to name a few) are transmitted by insect vectors with brief life cycles and biting activity that varies in both space and time. Although the general geographic distributions of these epidemiologically important species are known, the spatiotemporal variation in their emergence and activity remains poorly understood. We used ecological niche modeling via a genetic algorithm to produce time-specific predictive models of monthly distributions of Aedes aegypti in Mexico in 1995. Significant predictions of monthly mosquito activity and distributions indicate that predicting spatiotemporal dynamics of disease vector species is feasible; significant coincidence with human cases of dengue indicates that these dynamics probably translate directly into transmission of dengue virus to humans. This approach provides new potential for optimizing use of resources for disease prevention and remediation via automated forecasting of disease transmission risk.

  13. Vector and Tensor Analyzing Powers in Deuteron-Proton Breakup

    NASA Astrophysics Data System (ADS)

    Stephan, E.; Kistryn, St.; Kalantar-Nayestanaki, N.; Biegun, A.; Bodek, K.; Ciepał, I.; Deltuva, A.; Eslami-Kalantari, M.; Fonseca, A. C.; Gasparić, I.; Golak, J.; Jamróz, B.; Joulaeizadeh, L.; Kamada, H.; Kiš, M.; Kłos, B.; Kozela, A.; Mahjour-Shafiei, M.; Mardanpour, H.; Messchendorp, J.; Micherdzińska, A.; Moeini, H.; Nogga, A.; Ramazani-Moghaddam-Arani, A.; Skibiński, R.; Sworst, R.; Witała, H.; Zejma, J.

    2011-05-01

    High precision data for vector and tensor analyzing powers of the ^1H(d, pp)n breakup reaction at 130 and 100 MeV deuteron beam energies have been measured in a large fraction of the phase space. They are compared to theoretical predictions based on various approaches to describing the three-nucleon (3N) system dynamics. The theoretical predictions describe the vector analyzing power data very well, with no need to include any three-nucleon force (3NF) effects for these observables. Tensor analyzing powers can also be very well reproduced by calculations in most of the studied region, but locally certain discrepancies are observed. At 130 MeV such discrepancies for A_xy usually appear, or are enhanced, when model 3N forces are included. Predicted effects of 3NFs are much lower at 100 MeV, and at this energy equally good consistency between the data and the calculations is obtained with or without 3NFs.

  14. Constrained multibody system dynamics: An automated approach

    NASA Technical Reports Server (NTRS)

    Kamman, J. W.; Huston, R. L.

    1982-01-01

    The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. The closed-loop problem of multibody chain systems is addressed. The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. The modification, based upon a solution of the constraint equations obtained through a zero-eigenvalue theorem, is a contraction of the dynamical equations. For a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as constraint vectors in n-dimensional space. In this setting the system itself is free to move in the n-m directions which are orthogonal to the constraint vectors.
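    The orthogonal-complement construction can be sketched numerically. Computing the null space of the constraint matrix via the SVD is one standard way to realize the zero-eigenvalue solution; the example constraint below is hypothetical:

```python
import numpy as np

# For m constraint equations B qdot = 0 in n generalized speeds, the
# rows of B are the constraint vectors; the system can only move in the
# (n - m)-dimensional orthogonal complement.  An orthonormal basis of
# that complement comes from the SVD right-singular vectors whose
# singular values vanish (the "zero eigenvalues" of B^T B).
def free_directions(B, tol=1e-10):
    B = np.atleast_2d(np.asarray(B, dtype=float))
    _, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T          # columns span the null space of B

# Example: n = 3 speeds, m = 1 constraint  qdot_1 + qdot_2 + qdot_3 = 0
B = np.array([[1.0, 1.0, 1.0]])
N = free_directions(B)          # 3 x 2: the admissible directions
```

    Projecting the dynamical equations onto the columns of N is the "contraction" that eliminates the constraint forces.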

  15. Deorbit targeting

    NASA Technical Reports Server (NTRS)

    Tempelman, W. H.

    1973-01-01

    The navigation and control of the space shuttle during atmospheric entry are discussed. A functional flow diagram presenting the basic approach to the deorbit targeting problem is presented. The major inputs to be considered are: (1) vehicle state vector, (2) landing site location, (3) entry interface parameters, (4) earliest desired time of landing, and (5) maximum cross range. Mathematical models of the navigational procedures based on controlled thrust times are developed.

  16. Projective mappings and dimensions of vector spaces of three types of Killing-Yano tensors on pseudo Riemannian manifolds of constant curvature

    NASA Astrophysics Data System (ADS)

    Mikeš, Josef; Stepanov, Sergey; Hinterleitner, Irena

    2012-07-01

    In our paper we have determined the dimension of the space of conformal Killing-Yano tensors and the dimensions of its two subspaces of closed conformal Killing-Yano and Killing-Yano tensors on pseudo Riemannian manifolds of constant curvature. This result is a generalization of well known results on sharp upper bounds of the dimensions of the vector spaces of conformal Killing-Yano, Killing-Yano and concircular vector fields on pseudo Riemannian manifolds of constant curvature.

  17. Energy Dissipation of Rayleigh Waves due to Absorption Along the Path by the Use of Finite Element Method

    DTIC Science & Technology

    1979-07-31

    (Abstract garbled in the source; recoverable fragment:) In a dissipating medium the deformation of a solid is a function of time, temperature and space. Creep is a deformation process in which there is

  18. The Sequential Implementation of Array Processors when there is Directional Uncertainty

    DTIC Science & Technology

    1975-08-01

    (Abstract garbled in the source; recoverable fragment:) The University of Washington kindly supplied office space and computing facilities. The author has benefited greatly from discussions with several other

  19. Application of neuroanatomical features to tractography clustering.

    PubMed

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2013-09-01

    Diffusion tensor imaging allows unprecedented insight into brain neural connectivity in vivo by allowing reconstruction of neuronal tracts via captured patterns of water diffusion in white matter microstructures. However, tractography algorithms often output hundreds of thousands of fibers, rendering subsequent data analysis intractable. As a remedy, fiber clustering techniques are able to group fibers into dozens of bundles and thus facilitate analyses. Most existing fiber clustering methods rely on geometrical information of fibers, by viewing them as curves in 3D Euclidean space. The important neuroanatomical aspect of fibers, however, is ignored. In this article, the neuroanatomical information of each fiber is encapsulated in the associativity vector, which functions as the unique "fingerprint" of the fiber. Specifically, each entry in the associativity vector describes the relationship between the fiber and a certain anatomical ROI in a fuzzy manner. The value of the entry approaches 1 if the fiber is spatially related to the ROI at high confidence; on the contrary, the value drops closer to 0. The confidence of the ROI is calculated by diffusing the ROI according to the underlying fibers from tractography. In particular, we have adopted the fast marching method for simulation of ROI diffusion. Using the associativity vectors of fibers, we further model fibers as observations sampled from multivariate Gaussian mixtures in the feature space. To group all fibers into relevant major bundles, an expectation-maximization clustering approach is employed. Experimental results indicate that our method results in anatomically meaningful bundles that are highly consistent across subjects. Copyright © 2012 Wiley Periodicals, Inc., a Wiley company.
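    The clustering step can be sketched with a minimal EM procedure. The spherical-Gaussian simplification, the farthest-point initialization and the synthetic "associativity vectors" below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def em_gmm(X, k, iters=50):
    """Minimal EM for a spherical Gaussian mixture: a simplified stand-in
    for clustering fibers by their associativity vectors."""
    n, d = X.shape
    # Farthest-point initialization of the means
    idx = [0]
    for _ in range(k - 1):
        d2min = np.min(((X[:, None] - X[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d2min)))
    mu = X[idx].copy()
    var = np.full(k, X.var() + 1e-9)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under spherical Gaussians
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)          # (n, k)
        logp = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (nk * d) + 1e-9
    return r.argmax(axis=1), mu

# Two well-separated synthetic "bundles" of 5-D associativity vectors
X = np.vstack([rng.normal(0.0, 0.1, (40, 5)),
               rng.normal(1.0, 0.1, (40, 5))])
labels, centers = em_gmm(X, k=2)
```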

  20. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window-size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), to map homogeneous specific land cover.
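    A heavily simplified data-description sketch: the kernel-centroid model below gives every training pattern equal weight, whereas true SVDD solves a quadratic program with tradeoff coefficient C. It only illustrates the accept/reject geometry of a hypersphere in kernel feature space; the data are synthetic:

```python
import numpy as np

def rbf(X, Y, s=1.0):
    """RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

class SimpleDataDescription:
    """Kernel-centroid data description (all weights equal): a test point
    is accepted when its feature-space distance to the centroid is within
    the largest distance observed on the target training set."""
    def fit(self, X, s=1.0):
        self.X, self.s = np.asarray(X, dtype=float), s
        K = rbf(self.X, self.X, s)
        self.kmean = K.mean()                       # <c, c>
        d2 = 1.0 - 2.0 * K.mean(axis=1) + self.kmean  # k(x,x) = 1 for RBF
        self.r2 = d2.max()                          # squared radius
        return self

    def accepts(self, x):
        kx = rbf(np.atleast_2d(np.asarray(x, float)), self.X, self.s).mean()
        return (1.0 - 2.0 * kx + self.kmean) <= self.r2 + 1e-12

target = np.random.default_rng(2).normal(0.0, 0.3, (50, 2))
dd = SimpleDataDescription().fit(target)
```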

  1. Capabilities of software "Vector-M" for a diagnostics of the ionosphere state from auroral emissions images and plasma characteristics from the different orbits as a part of the system of control of space weather

    NASA Astrophysics Data System (ADS)

    Avdyushev, V.; Banshchikova, M.; Chuvashov, I.; Kuzmin, A.

    2017-09-01

    This paper presents the capabilities of the software "Vector-M" for diagnostics of the ionosphere state from auroral emission images and plasma characteristics, obtained from different orbits, as part of a space-weather monitoring system. The software "Vector-M" was developed by the celestial mechanics and astrometry department of Tomsk State University in collaboration with the Space Research Institute (Moscow) and the Central Aerological Observatory of the Russian Federal Service for Hydrometeorology and Environmental Monitoring. It is intended for calculating attendant geophysical and astronomical information for the centre of mass of the spacecraft and for the region of observations in the experiment with the auroral imager Aurovisor-VIS/MP on the orbit of the perspective Meteor-MP spacecraft.

  2. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.

  3. Spatial and temporal variation of life-history traits documented using capture-mark-recapture methods in the vector snail Bulinus truncatus.

    PubMed

    Chlyeh, G; Henry, P Y; Jarne, P

    2003-09-01

    The population biology of the schistosome-vector snail Bulinus truncatus was studied in an irrigation area near Marrakech, Morocco, using demographic approaches, in order to estimate life-history parameters. The survey was conducted as 2 capture-mark-recapture analyses at 2 separate sites in the irrigation area, the first in 1999 and the second in 2000. Individuals larger than 5 mm were considered. The capture probability varied through time and space in both analyses. Apparent survival (from 0.7 to 1 per period of 2-3 days) varied with time and space (a series of sinks was considered), as well as with a square function of size. These results suggest variation in the population's intrinsic rate of increase. They also suggest that results from more classical analyses of population demography, aiming, for example, at estimating population size, should be interpreted with caution. Together with other results obtained in the same irrigation area, they also lead to some suggestions for population control.

  4. Ontology-based vector space model and fuzzy query expansion to retrieve knowledge on medical computational problem solutions.

    PubMed

    Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos

    2007-01-01

    Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
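    The co-occurrence-based fuzzy thesaurus and query expansion can be sketched as follows. The terms, incidence matrix and threshold are hypothetical; the paper additionally derives similarities from UMLS-based ontology relations (CUIs, synonyms, broader-narrower links):

```python
import numpy as np

# Hypothetical term-document incidence; rows = terms, columns = documents.
terms = ["fever", "pyrexia", "headache", "aspirin"]
D = np.array([[1, 1, 0, 1, 0],
              [1, 1, 0, 0, 0],     # "pyrexia" co-occurs with "fever"
              [0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1]], dtype=float)

# Fuzzy thesaurus: term-to-term similarity from simultaneous occurrences,
# normalized to [0, 1] (cosine-like membership degree).
co = D @ D.T
sim = co / np.sqrt(np.outer(co.diagonal(), co.diagonal()))

def expand(query_terms, threshold=0.5):
    """Add every term whose fuzzy similarity to some query term meets
    the threshold: a simplified fuzzy query expansion."""
    idx = [terms.index(t) for t in query_terms]
    mask = sim[idx].max(axis=0) >= threshold
    return sorted({terms[i] for i in np.nonzero(mask)[0]} | set(query_terms))

expanded = expand(["fever"])
```

    Here "pyrexia" is pulled into the query because it co-occurs strongly with "fever", while the weakly related "headache" stays below the threshold.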

  5. Wideband optical vector network analyzer based on optical single-sideband modulation and optical frequency comb.

    PubMed

    Xue, Min; Pan, Shilong; He, Chao; Guo, Ronghui; Zhao, Yongjiu

    2013-11-15

    A novel approach to increasing the measurement range of the optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation is proposed and experimentally demonstrated. In the proposed system, each comb line in an optical frequency comb (OFC) is selected by an optical filter and used as the optical carrier for the OSSB-based OVNA. The frequency responses of an optical device-under-test (ODUT) are thus measured channel by channel. Because the comb lines in the OFC have a fixed frequency spacing, by fitting the responses measured in all channels together, the magnitude and phase responses of the ODUT can be accurately obtained over a large range. A proof-of-concept experiment is performed: a measurement range of 105 GHz and a resolution of 1 MHz are achieved when a five-comb-line OFC with a frequency spacing of 20 GHz is applied to measure the magnitude and phase responses of a fiber Bragg grating.
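    The channel-stitching arithmetic can be sketched as follows, using the comb parameters quoted in the abstract; the uniform per-channel sweep of plus/minus half the comb spacing is an assumption for illustration:

```python
import numpy as np

# Sketch of channel stitching: each comb line anchors a sweep covering
# +/- half the comb spacing; with a fixed spacing the per-channel
# frequency axes tile into one continuous measurement grid.
spacing = 20e9                   # comb-line spacing, Hz (from the paper)
comb = np.arange(5) * spacing    # 5 comb lines, as in the experiment
step = 1e6                       # 1 MHz resolution (from the paper)

offsets = np.arange(-spacing / 2, spacing / 2, step)
freq_axis = np.concatenate([c + offsets for c in comb])
span = freq_axis.max() - freq_axis.min()   # total stitched range
```

    Five 20 GHz channels tile into roughly a 100 GHz contiguous grid; the extra margin in the reported 105 GHz range comes from the instrument details, not this toy tiling.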

  6. General very special relativity in Finsler cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouretsis, A. P.; Stathakopoulos, M.; Stavrinos, P. C.

    2009-05-15

    General very special relativity (GVSR) is the curved space-time of very special relativity (VSR) proposed by Cohen and Glashow. The geometry of general very special relativity possesses a line element of Finsler geometry introduced by Bogoslovsky. We calculate the Einstein field equations and derive a modified Friedmann-Robertson-Walker cosmology for an osculating Riemannian space. The Friedmann equation of motion leads to an explanation of the cosmological acceleration in terms of an alternative non-Lorentz invariant theory. A first order approach for a primordial-spurionic vector field introduced into the metric gives back an estimation of the energy evolution and inflation.

  7. NUDTSNA at TREC 2015 Microblog Track: A Live Retrieval System Framework for Social Network based on Semantic Expansion and Quality Model

    DTIC Science & Technology

    2015-11-20

    Similarity between tweets and profiles is computed as follows: a TFIDF score calculates the cosine similarity between a tweet and a profile in a vector space model with TF-IDF weights on terms. The vector space model represents a document as a vector, so tweets and profiles can both be expressed as vectors. The returned tweet set is scored with a gain function; tweets that are not interesting, or are spam/junk, receive a gain of 0.

  8. A transversal approach to predict gene product networks from ontology-based similarity

    PubMed Central

    Chabalier, Julie; Mosser, Jean; Burgun, Anita

    2007-01-01

    Background Interpretation of transcriptomic data is usually made through a "standard" approach which consists in clustering the genes according to their expression patterns and exploiting Gene Ontology (GO) annotations within each expression cluster. This approach makes it difficult to underline functional relationships between gene products that belong to different expression clusters. To address this issue, we propose a transversal analysis that aims to predict functional networks based on a combination of GO processes and expression data. Results The transversal approach presented in this paper consists in computing the semantic similarity between gene products in a Vector Space Model. Through a weighting scheme over the annotations, we take into account the representativity of the terms that annotate a gene product. Comparing annotation vectors results in a matrix of gene product similarities. Combined with expression data, the matrix is displayed as a set of functional gene networks. The transversal approach was applied to 186 genes related to the enterocyte differentiation stages. This approach resulted in 18 functional networks that proved to be biologically relevant. These results were compared with those obtained through a standard approach and with an approach based on information content similarity. Conclusion Complementary to the standard approach, the transversal approach offers new insight into cellular mechanisms and reveals new research hypotheses by combining gene product networks based on semantic similarity with expression data. PMID:17605807

  9. Trends in space activities in 2014: The significance of the space activities of governments

    NASA Astrophysics Data System (ADS)

    Paikowsky, Deganit; Baram, Gil; Ben-Israel, Isaac

    2016-01-01

    This article addresses the principal events of 2014 in the field of space activities, and extrapolates from them the primary trends that can be identified in governmental space activities. In 2014, global space activities centered on two vectors. The first was geopolitical, and the second relates to the matrix between increasing commercial space activities and traditional governmental space activities. In light of these two vectors, the article outlines and analyzes trends of space exploration, human spaceflights, industry and technology, cooperation versus self-reliance, and space security and sustainability. It also reviews the space activities of the leading space-faring nations.

  10. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
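    Direct space-varying convolution, the expensive operation that matrix source coding accelerates, can be sketched as follows. The position-dependent Gaussian blur is a hypothetical stand-in for a stray-light kernel:

```python
import numpy as np

# Direct (dense) space-varying convolution in 1-D: the impulse response
# h is allowed to change slowly with output position i.  Because h
# depends on i, the FFT shortcut for space-invariant convolution does
# not apply, and the cost is that of a dense matrix-vector product.
def space_varying_convolve(x, kernel_at):
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        h = kernel_at(i)            # local impulse response at output i
        r = len(h) // 2
        for j, w in enumerate(h):
            k = i + j - r
            if 0 <= k < n:
                y[i] += w * x[k]
    return y

# Hypothetical kernel that widens with position, loosely mimicking a
# blur that varies across the image plane.
def kernel_at(i):
    width = 1.0 + 0.1 * i
    t = np.arange(-3, 4)
    h = np.exp(-t ** 2 / (2 * width ** 2))
    return h / h.sum()

x = np.zeros(32)
x[16] = 1.0                          # impulse input
y = space_varying_convolve(x, kernel_at)
```

    Stacking the local kernels as rows yields exactly the dense matrix whose lossy coding the paper exploits to cut both storage and multiply cost.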

  11. Climate impacts on environmental risks evaluated from space: a contribution to social benefits within the GEOSS Health Area: The case of Rift Valley Fever in Senegal

    NASA Astrophysics Data System (ADS)

    Tourre, Y. M.

    2009-12-01

    Climate and environment vary on many spatio-temporal scales, including climate change, with impacts on ecosystems, vector-borne diseases and public health worldwide. This study aims to enable societal benefits through a conceptual approach: mapping climatic and environmental conditions from space and understanding the underlying mechanisms, within the GEOSS Health Societal Benefit Area. A case study of Rift Valley Fever (RVF) epidemics in Senegal is presented. Ponds, in which mosquitoes thrive, were identified from remote sensing using high-resolution SPOT-5 satellite images. Additional data on pond dynamics and rainfall events (obtained from the Tropical Rainfall Measuring Mission) were combined with hydrological in-situ data. Localization of vulnerable hosts, such as parked cattle (from the QuickBird satellite), is also used. The dynamic spatio-temporal distribution of Aedes vexans density (one of the main RVF vectors) is based on the total rainfall amount and pond dynamics. While Zones Potentially Occupied by Mosquitoes (ZPOM) are mapped, detailed risk areas, i.e. zones where hazards and vulnerability coincide, are expressed as percentages of parks where cattle are potentially exposed to mosquito bites. This new conceptual approach, using remote-sensing techniques belonging to GEOSS, relies simply upon rainfall distribution, also evaluated from space. It is meant to contribute to the implementation of an integrated operational early warning system within the health application communities, since climatic and environmental conditions (both natural and anthropogenic) are changing rapidly.

  12. Estimation of 3-D conduction velocity vector fields from cardiac mapping data.

    PubMed

    Barnette, A R; Bayly, P V; Zhang, S; Walcott, G P; Ideker, R E; Smith, W M

    2000-08-01

    A method to estimate three-dimensional (3-D) conduction velocity vector fields in cardiac tissue is presented. The speed and direction of propagation are found from polynomial "surfaces" fitted to space-time (x, y, z, t) coordinates of cardiac activity. The technique is applied to sinus rhythm and paced rhythm mapped with plunge needles at 396-466 sites in the canine myocardium. The method was validated on simulated 3-D plane and spherical waves. For simulated data, conduction velocities were estimated with an accuracy of 1%-2%. In experimental data, estimates of conduction speeds during paced rhythm were slower than those found during normal sinus rhythm. Vector directions were also found to differ between different types of beats. The technique was able to distinguish between premature ventricular contractions and sinus beats and between sinus and paced beats. The proposed approach to computing velocity vector fields provides an automated, physiological, and quantitative description of local electrical activity in 3-D tissue. This method may provide insight into abnormal conduction associated with fatal ventricular arrhythmias.
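    The fitting step can be sketched as follows, assuming a first-order (planar) fit rather than the higher-order polynomial surfaces used in the paper; the function name and the synthetic plane wave are illustrative. The gradient of activation time with respect to position is the slowness vector, so the velocity vector is the gradient divided by its squared norm.

```python
import numpy as np

def velocity_from_activation(xyz, t):
    """Fit a linear 'surface' t(x, y, z) = a + g.x by least squares.
    The activation-time gradient g is the slowness vector, so the
    conduction velocity vector is v = g / |g|^2 (speed = 1 / |g|)."""
    X = np.column_stack([np.ones(len(t)), xyz])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    g = coef[1:]
    return g / np.dot(g, g)

# Validation on a simulated 3-D plane wave, analogous to the paper's tests.
rng = np.random.default_rng(1)
xyz = rng.uniform(-5.0, 5.0, size=(200, 3))   # electrode sites (mm)
n_hat = np.array([1.0, 2.0, 2.0]) / 3.0       # unit propagation direction
speed = 0.5                                   # mm/ms
t = xyz @ n_hat / speed                       # activation times (ms)

v = velocity_from_activation(xyz, t)          # recovers speed and direction
```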

  13. Prediction of protein structural classes by Chou's pseudo amino acid composition: approached using continuous wavelet transform and principal component analysis.

    PubMed

    Li, Zhan-Chao; Zhou, Xi-Bin; Dai, Zong; Zou, Xiao-Yong

    2009-07-01

    Prior knowledge of a protein's structural class provides useful information about its overall structure, so quick and accurate computational determination of the structural class is an important problem in protein science. A key requirement of any computational method is an accurate representation of the protein sample. Here, based on the concept of Chou's pseudo-amino acid composition (AAC; Chou, Proteins: Structure, Function, and Genetics, 43:246-255, 2001), a novel feature-extraction method combining the continuous wavelet transform (CWT) with principal component analysis (PCA) is introduced for the prediction of protein structural classes. First, a digital signal is obtained by mapping each amino acid to various physicochemical properties. Second, the CWT is used to extract a new feature vector based on the wavelet power spectrum (WPS), which carries rich sequence-order information in both the frequency and time domains, and PCA is then used to reorganize the feature vector to reduce information redundancy and computational complexity. Finally, a pseudo-amino acid composition feature vector representing the primary sequence is formed by coupling the AAC vector with the new WPS feature vector in an orthogonal space obtained by PCA. As a showcase, rigorous jackknife cross-validation tests were performed on the working datasets. The results indicate improved prediction quality, and the current approach to protein representation may serve as a useful complementary vehicle for classifying other protein attributes, such as enzyme family class, subcellular localization, membrane protein type and protein secondary structure.
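    A minimal sketch of the CWT-plus-PCA pipeline, under simplifying assumptions: a single physicochemical property (the Kyte-Doolittle hydropathy scale) instead of the several properties used in the paper, a hand-rolled real Morlet wavelet, and fixed time bins so sequences of different length give equal-size feature vectors. All function names are illustrative.

```python
import numpy as np

# One property per residue (Kyte-Doolittle hydropathy) as an example mapping.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def morlet(length, scale):
    t = (np.arange(length) - length // 2) / scale
    return np.exp(-0.5 * t ** 2) * np.cos(5.0 * t) / np.sqrt(scale)

def wavelet_power_features(seq, scales=(2, 4, 8), n_bins=8):
    """CWT power spectrum, averaged into fixed time bins per scale."""
    sig = np.array([KD[a] for a in seq], dtype=float)
    feats = []
    for s in scales:
        w = morlet(min(len(sig), 10 * s), s)
        power = np.convolve(sig, w, mode='same') ** 2
        feats.extend(chunk.mean() for chunk in np.array_split(power, n_bins))
    return np.array(feats)

def pca_reduce(X, k):
    """Reorganize features with PCA (via SVD) to cut redundancy."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
        "GAVLIMFWPSTCYNQDEKRH" * 2,
        "ACDEFGHIKLMNPQRSTVWY"]
X = np.array([wavelet_power_features(s) for s in seqs])  # 3 x 24 features
Z = pca_reduce(X, k=2)                                   # decorrelated features
```

    In the paper the reduced WPS vector is then concatenated with the AAC vector before classification.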

  14. Propagation and wavefront ambiguity of linear nondiffracting beams

    NASA Astrophysics Data System (ADS)

    Grunwald, R.; Bock, M.

    2014-02-01

    Ultrashort-pulsed Bessel and Airy beams in free space are often interpreted as "linear light bullets". Usually, their interconnected intensity profiles are regarded as "propagating" along arbitrary pathways, which can even follow curved trajectories. A more detailed analysis, however, shows that this picture is adequate only in situations that do not require consideration of the transport of optical signals or of causality. To cover these special cases as well, a generalization of the terms "beam" and "propagation" is necessary. The problem becomes clearer when the angular spectra of the propagating wave fields are represented by rays or Poynting vectors. It is known that quasi-nondiffracting beams can be described as caustics of ray bundles. Their decomposition into Poynting vectors by Shack-Hartmann sensors indicates that, within the frame of their classical definition, the corresponding local wavefronts are ambiguous and that concepts based on energy density cannot describe the propagation completely. For this reason, quantitative parameters like the beam propagation factor must also be treated with caution. For applications like communication or optical computing, alternative descriptions are required. A heuristic approach based on vector-field information transport and Fourier analysis is proposed here. Continuity and discontinuity of far-field distributions in space and time are discussed, and quantum aspects of propagation are briefly addressed.

  15. A novel and efficient technique for identification and classification of GPCRs.

    PubMed

    Gupta, Ravi; Mittal, Ankush; Singh, Kuldip

    2008-07-01

    G-protein coupled receptors (GPCRs) play a vital role in many biological processes, such as the regulation of growth, death, and metabolism of cells. GPCRs are the focus of a significant amount of current pharmaceutical research, since they interact with more than 50% of prescription drugs. The dipeptide-based support vector machine (SVM) approach is the most accurate technique for identifying and classifying GPCRs, but it has two major disadvantages. First, the dipeptide-based feature vector has dimension 400; this large dimension makes the classification task inefficient in both computation and memory. Second, it does not consider the biological properties of the protein sequence. In this paper, we present a novel feature-based SVM classification technique. The features are derived by applying wavelet-based time-series analysis to the protein sequences. The proposed feature space summarizes the variance information of seven important biological properties of the amino acids in a protein sequence, and the resulting feature vector has a dimension of only 35. Experiments were performed on GPCR protein sequences available in the GPCR Database. Our approach achieves accuracies of 99.9%, 98.06%, 97.78%, and 94.08% for the GPCR superfamily, families, subfamilies, and sub-subfamilies (amine group), respectively, when evaluated using fivefold cross-validation. Further, accuracies of 99.8%, 97.26%, and 97.84% were obtained when evaluated on unseen (recall) datasets for the GPCR superfamily, families, and subfamilies, respectively. Comparison with the dipeptide-based SVM technique shows the effectiveness of our approach.

  16. Literature-based concept profiles for gene annotation: the issue of weighting.

    PubMed

    Jelier, Rob; Schuemie, Martijn J; Roes, Peter-Jan; van Mulligen, Erik M; Kors, Jan A

    2008-05-01

    Text-mining has been used to link biomedical concepts, such as genes or biological processes, to each other for annotation purposes or the generation of new hypotheses. To relate two concepts to each other several authors have used the vector space model, as vectors can be compared efficiently and transparently. Using this model, a concept is characterized by a list of associated concepts, together with weights that indicate the strength of the association. The associated concepts in the vectors and their weights are derived from a set of documents linked to the concept of interest. An important issue with this approach is the determination of the weights of the associated concepts. Various schemes have been proposed to determine these weights, but no comparative studies of the different approaches are available. Here we compare several weighting approaches in a large scale classification experiment. Three different techniques were evaluated: (1) weighting based on averaging, an empirical approach; (2) the log likelihood ratio, a test-based measure; (3) the uncertainty coefficient, an information-theory based measure. The weighting schemes were applied in a system that annotates genes with Gene Ontology codes. As the gold standard for our study we used the annotations provided by the Gene Ontology Annotation project. Classification performance was evaluated by means of the receiver operating characteristics (ROC) curve using the area under the curve (AUC) as the measure of performance. All methods performed well with median AUC scores greater than 0.84, and scored considerably higher than a binary approach without any weighting. Especially for the more specific Gene Ontology codes excellent performance was observed. The differences between the methods were small when considering the whole experiment. However, the number of documents that were linked to a concept proved to be an important variable. 
When larger amounts of text were available for the generation of the concept vectors, the performance of the methods diverged considerably, with the uncertainty coefficient then outperforming the other two methods.
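    The uncertainty coefficient itself is straightforward to compute from a 2x2 document co-occurrence table, and the concept profiles are compared with cosine similarity as in the vector space model. The sketch below is illustrative (variable names and counts are assumptions, not the paper's data):

```python
import numpy as np

def uncertainty_coefficient(n11, n10, n01, n00):
    """U(concept | gene) = I(concept; gene) / H(concept), from a 2x2
    document count table: n11 docs with both, n10 concept only,
    n01 gene only, n00 neither (natural-log entropies; ratio is unit-free)."""
    p = np.array([[n11, n10], [n01, n00]], dtype=float)
    p /= p.sum()
    def H(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()
    px, py = p.sum(axis=1), p.sum(axis=0)          # marginals
    return (H(px) + H(py) - H(p.ravel())) / H(px)  # mutual info / H(concept)

def cosine(u, v):
    """Similarity of two weighted concept profiles (dict: concept -> weight)."""
    keys = sorted(set(u) | set(v))
    a = np.array([u.get(k, 0.0) for k in keys])
    b = np.array([v.get(k, 0.0) for k in keys])
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

u_indep = uncertainty_coefficient(25, 25, 25, 25)   # no association -> 0
u_perfect = uncertainty_coefficient(50, 0, 0, 50)   # perfect association -> 1
```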

  17. Local effects of redundant terrestrial and GPS-based tie vectors in ITRF-like combinations

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Altamimi, Zuheir; Sarti, Pierguido; Negusini, Monia; Vittuari, Luca

    2009-11-01

    Tie vectors (TVs) between co-located space geodetic instruments are essential for combining terrestrial reference frames (TRFs) realised using different techniques. They provide relative positioning between instrumental reference points (RPs) which are part of a global geodetic network such as the international terrestrial reference frame (ITRF). This paper gathers the set of very long baseline interferometry (VLBI)-global positioning system (GPS) local ties measured at the observatory of Medicina (Northern Italy) during the years 2001-2006 and discusses some important aspects of the usage of co-location ties in combinations of TRFs. Two measurement approaches to the local survey are considered here: a GPS-based approach and a classical approach based on terrestrial observations (i.e. angles, distances and height differences). The behaviour of terrestrial local ties, which routinely enter combinations of space geodetic solutions, is compared with that of GPS-based local ties. In particular, we have performed and analysed different combinations of satellite laser ranging (SLR), VLBI and GPS long-term solutions in order to (i) evaluate the local effects of inserting the series of TVs computed at Medicina, (ii) investigate the consistency of GPS-based TVs with respect to space geodetic solutions, and (iii) discuss the effects of an imprecise alignment of TVs from a local to a global reference frame. Results of ITRF-like combinations show that terrestrial TVs produce the smallest residuals in all three components. In most cases, GPS-based TVs fit space geodetic solutions very well, especially in the horizontal components (N, E). By contrast, estimating the VLBI RP Up component with the GPS technique proves problematic, since the corresponding post-fit residuals are considerably larger. Moreover, combination tests including multi-temporal TVs display local effects of residual redistribution when compared with solutions in which the Medicina TVs are added one at a time. Finally, the combination of TRFs turns out to be sensitive to the orientation of the local tie in the global frame.

  18. Quantum and Classical OpticsEmerging Links

    DTIC Science & Technology

    2016-05-09

    …apparatus, the Young interferometer. Implementation of vector-space control directed at challenges in polarimetry has been mentioned… Ambiguous issues in standard approaches to polarimetry can be clarified by recognizing classical optical entanglement…

  19. The SAMEX Vector Magnetograph: A Design Study for a Space-Based Solar Vector Magnetograph

    NASA Technical Reports Server (NTRS)

    Hagyard, M. J.; Gary, G. A.; West, E. A.

    1988-01-01

    This report presents the results of a pre-phase A study performed by the Marshall Space Flight Center (MSFC) for the Air Force Geophysics Laboratory (AFGL) to develop a design concept for a space-based solar vector magnetograph and hydrogen-alpha telescope. These are two of the core instruments for a proposed Air Force mission, the Solar Activities Measurement Experiments (SAMEX). This mission is designed to study the processes which give rise to activity in the solar atmosphere and to develop techniques for predicting solar activity and its effects on the terrestrial environment.

  20. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models, and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partition, the validity functions automatically form a partition of unity, so normalization side effects, e.g., reactivation, are prevented. The integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for the modeling and prediction of complex nonlinear time series. The proposed approach is applied to the modeling and prediction of several nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and earlier studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for the modeling and prediction of nonlinear and chaotic systems and time series.
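    The LSSVM training step reduces to a single linear system, which is what makes it attractive as a local model. A minimal sketch under simplifying assumptions (an RBF kernel, a toy sine series in place of a chaotic one, and no HBT partitioning):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LSSVM training is one linear solve:
       [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

# One-step-ahead prediction with a two-lag embedding as model input.
s = np.sin(np.linspace(0.0, 4.0 * np.pi, 120))
X = np.column_stack([s[:-2], s[1:-1]])
y = s[2:]
b, alpha = lssvm_fit(X[:100], y[:100])
pred = lssvm_predict(X[:100], b, alpha, X[100:])
rmse = np.sqrt(np.mean((pred - y[100:]) ** 2))
```

    In the full LNF network one such model is fitted per HBT subdomain and the local predictions are blended by the validity functions.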

  1. Vectoring of parallel synthetic jets: A parametric study

    NASA Astrophysics Data System (ADS)

    Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram

    2016-11-01

    The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).

  2. A Code Generation Approach for Auto-Vectorization in the Spade Compiler

    NASA Astrophysics Data System (ADS)

    Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung

    We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.

  3. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    NASA Astrophysics Data System (ADS)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network created from GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be significantly distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. Analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example to the vector of geodesic parameters. The problem is developed theoretically and tested numerically.
An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS observations.
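    The preferred functional model, observation equations written directly for the Cartesian GNSS vector as a function of geodetic coordinates, rests on the standard geodetic-to-ECEF conversion. A minimal sketch using WGS84 constants (the linearization and least-squares adjustment are omitted; function names are illustrative):

```python
import numpy as np

A_WGS84 = 6378137.0                 # semi-major axis (m)
F_WGS84 = 1.0 / 298.257223563       # flattening
E2 = F_WGS84 * (2.0 - F_WGS84)      # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Geodetic coordinates (radians, metres) -> geocentric Cartesian XYZ."""
    N = A_WGS84 / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1.0 - E2) + h) * np.sin(lat)])

def gnss_baseline(p1, p2):
    """Model of a GNSS Cartesian vector written directly as a function of
    the two stations' geodetic coordinates (lat, lon, h)."""
    return geodetic_to_ecef(*p2) - geodetic_to_ecef(*p1)

v = gnss_baseline((0.748, 0.204, 50.0), (0.749, 0.205, 60.0))
```

    The observational equations in the paper are the linearized forms of this relation, which keeps the original Cartesian vector components as the observables.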

  4. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
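    The general phenomenon of a restricted combined-probability set is easy to demonstrate numerically for a qubit. The sketch below does not reproduce the paper's triangle inequalities; it uses the well-known Bloch-sphere constraint for the sigma_z/sigma_x settings purely as an illustration that the pairs (p, q) fill a proper subset of the unit square:

```python
import numpy as np

def born(state, basis):
    """Born rule: outcome probabilities |<e_i|psi>|^2 for one setting,
    with the setting's eigenvectors as the columns of `basis`."""
    return np.abs(basis.conj().T @ state) ** 2

# Two settings for a qubit (d = 2): the sigma_z and sigma_x eigenbases.
Z = np.eye(2, dtype=complex)
X = np.array([[1.0, 1.0], [1.0, -1.0]], dtype=complex) / np.sqrt(2.0)

rng = np.random.default_rng(7)
combined = []
for _ in range(2000):
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi /= np.linalg.norm(psi)
    combined.append((born(psi, Z)[0], born(psi, X)[0]))
combined = np.array(combined)

# Quantum constraint visible in the combined space: for any pure qubit
# state, <sz>^2 + <sx>^2 <= 1, i.e. (2p - 1)^2 + (2q - 1)^2 <= 1, so the
# combined vectors (p, q) are confined to a disc, not the whole square.
radius2 = (2.0 * combined[:, 0] - 1.0) ** 2 + (2.0 * combined[:, 1] - 1.0) ** 2
```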

  5. Generation of elliptical and circular vector hollow beams with different polarizations by a Mach-Zehnder-type optical path

    NASA Astrophysics Data System (ADS)

    Wang, Zhizhang; Pei, Chunying; Xia, Meng; Yin, Yaling; Xia, Yong; Yin, Jianping

    2018-01-01

    We present an experimental approach to converting linearly polarized Gaussian beams into elliptical and circular vector hollow beams (VHBs) with different polarization states. The scheme is based on a Mach-Zehnder-type optical path combined with a reflective spatial light modulator (SLM) in each arm. The resulting VHBs have radial, azimuthal, and other polarization states. Our studies also show that the size of the generated VHBs remains constant during propagation in free space over a certain distance and can be controlled by the axial ratio of the SLM's binary phase plate. These beams offer favourable optical parameters and hold promise for applications in optical trapping and the manipulation of particles.

  6. Flow of Combustion Products Containing Condensed-Phase Particles over a Recessed Vectorable Jet Nozzle

    NASA Astrophysics Data System (ADS)

    Volkov, K. N.; Denisikhin, S. V.; Emel'yanov, V. N.; Teterina, I. V.

    2017-09-01

    The flow of combustion products containing condensed-phase particles over the recessed vectorable nozzle of a solid-propellant rocket motor was investigated with the use of the Reynolds-averaged Navier-Stokes equations, equations of the k-ɛ model of turbulence, and the Lagrange approach. The fields of flows of combustion products and the mechanical trajectories of condensed-phase particles in the charge channel, the prenozzle space, and the nozzle unit of this motor were calculated for different angles of swing of the nozzle. The formation of vortices in the gas flow in the neighborhood of the downstream cover of the nozzle and their influence on the movement of particles different in size were considered.

  7. O Electromagnetic Power Waves and Power Density Components.

    NASA Astrophysics Data System (ADS)

    Petzold, Donald Wayne

    1980-12-01

    On January 10, 1884 Lord Rayleigh presented a paper entitled "On the Transfer of Energy in the Electromagnetic Field" to the Royal Society of London. This paper had been authored by the late Fellow of Trinity College, Cambridge, Professor J. H. Poynting and in it he claimed that there was a general law for the transfer of electromagnetic energy. He argued that associated with each point in space is a quantity, that has since been called the Poynting vector, that is a measure of the rate of energy flow per unit area. His analysis was concerned with the integration of this power density vector at all points over an enclosing surface of a specific volume. The interpretation of this Poynting vector as a true measure of the local power density was viewed with great skepticism unless the vector was integrated over a closed surface, as the development of the concept required. However, within the last decade or so Shadowitz indicates that a number of prominent authors have argued that the criticism of the interpretation of Poynting's vector as a local power density vector is unjustified. The present paper is not concerned with these arguments but instead is concerned with a decomposition of Poynting's power density vector into two and only two components: one vector which has the same direction as Poynting's vector and which is called the forward power density vector, and another vector, directed opposite to the Poynting vector and called the reverse power density vector. These new local forward and reverse power density vectors will be shown to be dependent upon forward and reverse power wave vectors and these vectors in turn will be related to newly defined forward and reverse components of the electric and magnetic fields. The sum of these forward and reverse power density vectors, which is simply the original Poynting vector, is associated with the total electromagnetic energy traveling past the local point. 
Another vector, the difference between the forward and reverse power density vectors, will also be introduced and shown to be associated with the total electric and magnetic field energy densities existing at a local point. These local forward and reverse power density vectors may be integrated over a surface to determine the forward and reverse powers, and from these results problems related to maximum power transfer or to the efficiency of electromagnetic energy transmission in space may be studied in a manner similar to that presently used with transmission lines, waveguides, and, more recently, two-port and multiport lumped-parameter systems. These new forward and reverse power density vectors at a point in space are analogous to the forward and reverse voltages, currents, and power waves used with a transmission line, waveguide, or port. These power wave vectors in space are a generalization of the power waves developed by Penfield, Youla, and Kurokawa and used with the scattering parameters associated with transmission lines, waveguides, and ports.

  8. Human action classification using procrustes shape theory

    NASA Astrophysics Data System (ADS)

    Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun

    2015-02-01

    In this paper, we propose a new method for classifying human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and then derive the Procrustes fit vector for this pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract the sequence of pre-shape vectors from an input video and project it onto the tangent space with respect to the pole given by the sequence of mean shape vectors corresponding to a target action class. We then calculate the Procrustes distance between the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method on a public dataset, the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
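    The pre-shape construction and the Procrustes distance at the core of the method can be sketched as follows. This is a simplification (the alignment here allows reflections for brevity, and names are illustrative), not the authors' full pipeline:

```python
import numpy as np

def preshape(X):
    """Pre-shape: remove location (centre) and scale (unit Frobenius norm)."""
    Xc = X - X.mean(axis=0)
    return Xc / np.linalg.norm(Xc)

def procrustes_distance(X, Y):
    """Full Procrustes distance: minimize ||Z1 - Z2 R||_F over orthogonal R
    (reflections allowed here for brevity); the minimum equals
    sqrt(2 - 2 * sum of singular values of Z2^T Z1)."""
    Z1, Z2 = preshape(X), preshape(Y)
    s = np.linalg.svd(Z2.T @ Z1, compute_uv=False)
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = 3.0 * square @ R.T + np.array([5.0, -2.0])   # similarity transform
rect = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])

d_same = procrustes_distance(square, moved)   # ~0: same shape
d_diff = procrustes_distance(square, rect)    # > 0: different shape
```

    Classification then amounts to computing this distance between an input sequence and the stored mean shapes and taking the minimum.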

  9. Vector control in developed countries

    PubMed Central

    Peters, Richard F.

    1963-01-01

    The recent rapid growth of California's population, leading to competition for space between residential, industrial and agricultural interests, the development of its water resources and increasing water pollution provide the basic ingredients of its present vector problems. Within the past half-century, the original mosquito habitats provided by nature have gradually given place to even more numerous and productive habitats of man-made character. At the same time, emphasis in mosquito control has shifted from physical to chemical, with the more recent extension to biological approaches as well. The growing domestic fly problem, continuing despite the virtual disappearance of the horse, is attributable to an increasing amount of organic by-products, stemming from growing communities, expanding industries and changing agriculture. The programme for the control of disease vectors and pest insects and animals directs its major effort to the following broad areas: (1) water management (including land preparation), (2) solid organic wastes management (emphasizing utilization), (3) community management (including design, layout, and storage practices of buildings and grounds), and (4) recreational area management (related to wildlife management). It is apparent that vector control can often employ economics as an ally in securing its objectives. Effective organization of the environment to produce maximum economic benefits to industry, agriculture, and the community results generally in conditions unfavourable to the survival of vector and noxious animal species. Hence, vector prevention or suppression is preferable to control as a programme objective. PMID:20604166

  10. Improvement of cardiac CT reconstruction using local motion vector fields.

    PubMed

    Schirra, Carsten Oliver; Bontus, Claas; van Stevendaal, Udo; Dössel, Olaf; Grass, Michael

    2009-03-01

    The motion of the heart is a major challenge for cardiac imaging using CT. A novel approach to decreasing motion blur and improving the signal-to-noise ratio is motion-compensated reconstruction, which takes motion vector fields into account in order to correct for motion. The presented work deals with the determination of local motion vector fields from high-contrast objects and their utilization within motion-compensated filtered back-projection reconstruction. Image registration is applied during the quiescent cardiac phases, and temporal interpolation in parameter space is used to estimate motion during strong-motion phases. The resulting motion vector fields are then used during image reconstruction. The method is assessed using a software phantom and several clinical calcium-scoring cases. As a criterion for reconstruction quality, calcium volume scores were derived from both gated cardiac reconstruction and motion-compensated reconstruction throughout the cardiac phases, using low-pitch helical cone-beam CT acquisitions. The presented technique is a robust method for determining and utilizing local motion vector fields. Motion-compensated reconstruction using the derived motion vector fields leads to superior image quality compared with gated reconstruction. As a result, the gating window can be enlarged significantly, increasing the SNR, while reliable Hounsfield units are achieved owing to the reduced level of motion artefacts. The enlargement of the gating window can be translated into reduced dose requirements.
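    The temporal interpolation of motion vector fields between quiescent phases can be sketched as follows. This is a toy 2-D example with linear blending and a simple pull-back warp; the actual method interpolates registration parameters inside a motion-compensated filtered back-projection, and all names here are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_mvf(mvf_a, mvf_b, phase_a, phase_b, phase):
    """Temporal interpolation in parameter space: blend motion vector
    fields estimated at two quiescent phases to an intermediate phase."""
    w = (phase - phase_a) / (phase_b - phase_a)
    return (1.0 - w) * mvf_a + w * mvf_b

def warp(image, mvf):
    """Pull-back warp: sample the image at positions displaced by the MVF
    (mvf[..., 0] = row displacement, mvf[..., 1] = column displacement)."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    return map_coordinates(image, [yy + mvf[..., 0], xx + mvf[..., 1]], order=1)

img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
mvf_a = np.zeros((32, 32, 2))                 # no motion at phase 0.3
mvf_b = np.zeros((32, 32, 2))
mvf_b[..., 1] = 2.0                           # 2-pixel shift at phase 0.7
mvf_mid = interpolate_mvf(mvf_a, mvf_b, 0.3, 0.7, 0.5)   # -> 1-pixel shift
warped = warp(img, mvf_mid)                   # sampled 1 pixel to the right
```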

  11. Quantization and superselection sectors III: Multiply connected spaces and indistinguishable particles

    NASA Astrophysics Data System (ADS)

    Landsman, N. P. Klaas

    2016-09-01

    We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel’s notion of C∗-algebraic (“strict”) deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which render the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD (i.e. Quantum Chromodynamics)).

  12. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic mapping can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
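
    The scientific-notation encoding at the heart of SplitVectors, drawing a magnitude as a separate mantissa and exponent rather than one linearly scaled arrow, can be sketched as follows (the function name and interface are ours, not the paper's):

```python
import math

def split_magnitude(value):
    """Split a positive magnitude into (mantissa, exponent) such that
    value == mantissa * 10**exponent with 1 <= mantissa < 10, the
    scientific-notation decomposition that SplitVectors renders as two
    separate glyphs instead of one linearly scaled arrow."""
    if value <= 0:
        raise ValueError("magnitude must be positive")
    exponent = math.floor(math.log10(value))
    return value / 10 ** exponent, exponent
```

    A magnitude of 0.00042 splits into mantissa 4.2 and exponent -4, so values spanning many orders of magnitude remain legible at a comparable glyph size.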

  13. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    PubMed

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic mapping can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in more challenging discrimination tasks.

  14. Interoperability Policy Roadmap

    DTIC Science & Technology

    2010-01-01

    Retrieval – SMART. The technique developed by Dr. Gerard Salton for automated information retrieval and text analysis is called the vector-space... Salton, G., Wong, A., Yang, C.S., “A Vector Space Model for Automatic Indexing”, Communications of the ACM, 18, 613-620. [10] Salton, G., McGill
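
    Salton's vector space model represents each document as a term vector and ranks documents by the cosine of the angle between vectors. A minimal sketch using raw term frequencies (SMART also applies tf-idf weighting, omitted here for brevity):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between two documents represented as raw
    term-frequency vectors, as in Salton's vector space model."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

    Identical documents score 1.0, documents with no shared terms score 0.0, and partial overlap falls in between, which is what makes the cosine a useful ranking function.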

  15. Application of Hyperspectal Techniques to Monitoring & Management of Invasive Plant Species Infestation

    DTIC Science & Technology

    2008-01-09

    The image data as acquired from the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data... The color of a material is defined by the direction of its unit vector in n-dimensional spectral space. The length of the vector relates only to how...to n-dimensional space. SAM determines the similarity
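
    The similarity measure alluded to in this excerpt, the Spectral Angle Mapper (SAM), compares spectra by the angle between their vectors in n-dimensional spectral space, so overall brightness (vector length) does not affect the score. A small sketch:

```python
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper similarity: the angle in radians between
    two spectra treated as vectors in n-dimensional spectral space.
    Scaling a spectrum (e.g. brighter illumination) changes only the
    vector length, leaving the angle unchanged."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))
```

    Two spectra that differ only by a scale factor yield an angle of zero, while orthogonal spectra yield pi/2.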

  16. Detecting Spatial Patterns of Natural Hazards from the Wikipedia Knowledge Base

    NASA Astrophysics Data System (ADS)

    Fan, J.; Stewart, K.

    2015-07-01

    The Wikipedia database is a data source of immense richness and variety. Included in this database are thousands of geotagged articles, including, for example, almost real-time updates on current and historic natural hazards. This includes user-contributed information about the location of natural hazards, the extent of the disasters, and many details relating to response, impact, and recovery. In this research, a computational framework is proposed to detect spatial patterns of natural hazards from the Wikipedia database by combining topic modeling methods with spatial analysis techniques. The computation is performed on the Neon Cluster, a high-performance computing cluster at the University of Iowa. This work uses wildfires as the exemplar hazard, but this framework is easily generalizable to other types of hazards, such as hurricanes or flooding. Latent Dirichlet Allocation (LDA) modeling is first trained on the entire English Wikipedia dump, transforming the database dump into a 500-dimensional topic model. Over 230,000 geo-tagged articles are then extracted from the Wikipedia database, spatially covering the contiguous United States. The geo-tagged articles are converted into an LDA topic space based on the topic model, with each article being represented as a weighted multidimensional topic vector. By treating each article's topic vector as an observed point in geographic space, a probability surface is calculated for each of the topics. In this work, Wikipedia articles about wildfires are extracted from the Wikipedia database, forming a wildfire corpus and creating a basis for the topic vector analysis. The spatial distribution of wildfire outbreaks in the US is estimated by calculating the weighted sum of the topic probability surfaces using a map algebra approach, and mapped using GIS. To provide an evaluation of the approach, the estimation is compared to wildfire hazard potential maps created by the USDA Forest Service.
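
    The map-algebra step at the end, a weighted sum of topic probability surfaces, is simple to sketch with NumPy. The grid size, number of topics, and weights below are illustrative stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three hypothetical topic probability surfaces on a 4 x 5 raster grid,
# each normalized so its cells sum to 1 (a probability surface).
topic_surfaces = rng.random((3, 4, 5))
topic_surfaces /= topic_surfaces.sum(axis=(1, 2), keepdims=True)

# Hypothetical weights derived from the wildfire corpus's mean topic
# vector; they sum to 1.
weights = np.array([0.6, 0.3, 0.1])

# Map-algebra step: estimated hazard surface = sum over topics of w_t * P_t
hazard_surface = np.tensordot(weights, topic_surfaces, axes=1)
```

    Because each surface integrates to one and the weights sum to one, the resulting hazard surface is itself a probability surface over the grid.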

  17. Production of transgenic Korean native cattle expressing enhanced green fluorescent protein using a FIV-based lentiviral vector injected into MII oocytes.

    PubMed

    Xu, Yong-Nan; Uhm, Sang-Jun; Koo, Bon-Chul; Kwon, Mo-Sun; Roh, Ji-Yeol; Yang, Jung-Seok; Choi, Hyun-Yong; Heo, Young-Tae; Cui, Xiang-Shun; Yoon, Joon-Ho; Ko, Dae-Hwan; Kim, Teoan; Kim, Nam-Hyung

    2013-01-20

    The potential benefits of generating and using transgenic cattle range from improvements in agriculture to the production of large quantities of pharmaceutically relevant proteins. Previous studies have attempted to produce transgenic cattle and other livestock by pronuclear injection and somatic cell nuclear transfer, but these approaches have been largely ineffective; however, a third approach, lentivirus-mediated transgenesis, has successfully produced transgenic livestock. In this study, we generated transgenic (TG) Korean native cattle using perivitelline space injection of viral vectors, which expressed enhanced green fluorescent protein (EGFP) systemically. Two different types of lentiviral vectors derived from feline immunodeficiency virus (FIV) and human immunodeficiency virus (HIV) carrying EGFP were injected into the perivitelline space of MII oocytes. EGFP expression at the 8-cell stage was significantly higher in the FIV group compared to the HIV group (47.5%±2.2% vs. 22.9%±2.9%). Eight-cell embryos that expressed EGFP were cultured into blastocysts and then transferred into 40 heifers. Ten heifers were successfully impregnated and delivered 10 healthy calves. All of these calves expressed EGFP as detected by in vivo imaging, PCR and Southern blotting. In addition, we established an EGFP-expressing cell line from TG calves, which was followed by nuclear transfer (NT). Recloned 8-cell embryos also expressed EGFP, and there were no differences in the rates of fusion, cleavage and development between cells derived from TG and non-TG calves, which were subsequently used for NT. These results illustrate that FIV-based lentiviruses are useful for the production of TG cattle. Moreover, our established EGFP cell line can be used for additional studies that involve induced pluripotent stem cells. Copyright © 2013. Published by Elsevier Ltd.

  18. Development of a NEW Vector Magnetograph at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    West, Edward; Hagyard, Mona; Gary, Allen; Smith, James; Adams, Mitzi; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    This paper describes the Experimental Vector Magnetograph that has been developed at the Marshall Space Flight Center (MSFC). This instrument was designed to improve linear polarization measurements by replacing electro-optic and rotating waveplate modulators with a rotating linear analyzer. We describe the motivation for developing this magnetograph, compare this instrument with traditional magnetograph designs, and present a comparison of the data acquired by this instrument and by the original MSFC vector magnetograph.

  19. Implementation of a new fuzzy vector control of induction motor.

    PubMed

    Rafa, Souad; Larabi, Abdelkader; Barazane, Linda; Manceur, Malik; Essounbouli, Najib; Hamzaoui, Abdelaziz

    2014-05-01

    The aim of this paper is to present a new approach to controlling an induction motor using type-1 fuzzy logic. The induction motor has a nonlinear, uncertain, and strongly coupled model. The vector control technique, which is based on the inverse model of the induction motor, solves the coupling problem. Unfortunately, this does not hold in practice because of model uncertainties. Indeed, the presence of these uncertainties led us to draw on human expertise through fuzzy logic techniques. In order to maintain the decoupling and to overcome the problem of sensitivity to parametric variations, the field-oriented control is replaced by a new control block. The simulation results show that both control schemes provide, in their basic configurations, comparable performance regarding the decoupling. However, the fuzzy vector control provides insensitivity to parametric variations compared to the classical one. The fuzzy vector control scheme is successfully implemented in real time using a digital signal processor board dSPACE 1104. The efficiency of this technique is also verified experimentally under different dynamic operating conditions such as sudden load changes, parameter variations, speed changes, etc. The fuzzy vector control is found to be well suited for application in an induction motor. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  20. U(1)-invariant membranes: The geometric formulation, Abel, and pendulum differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheltukhin, A. A.; Fysikum, AlbaNova, Stockholm University, 106 91 Stockholm; NORDITA, Roslagstullsbacken 23, 106 91 Stockholm

    The geometric approach to study the dynamics of U(1)-invariant membranes is developed. The approach reveals an important role of the Abel nonlinear differential equation of the first type with variable coefficients depending on time and one of the membrane extendedness parameters. The general solution of the Abel equation is constructed. Exact solutions of the whole system of membrane equations in the D=5 Minkowski space-time are found and classified. It is shown that if the radial component of the membrane world vector is only time dependent, then the dynamics is described by the pendulum equation.

  1. Current Simulation Methods in Military Systems Vulnerability Assessment

    DTIC Science & Technology

    1990-11-01

    Weapons * 1990: JASON Review of the Army Approach to Vulnerability Testing Many of the suggestions and recommendations made by these committees concern...damage vectors. Ongoing work by the JASONs 29 is also targeted to developing statistical methods for LF-test/SQuASH-model comparisons in Space 2]. We...Technical Report BRL-TR-3113, June 1990. 28. L. Tonnessen, A. Fries, L. Starkey and A. Stein, Live Fire Testing in the Evaluation of the Vulnerability of

  2. Geometrization of quantum physics

    NASA Astrophysics Data System (ADS)

    Ol'Khov, O. A.

    2009-12-01

    It is shown that the Dirac equation for a free particle can be considered as a description of a specific distortion of the Euclidean geometry of space (a space topological defect). This approach is based on the possibility of interpreting the wave function as a vector realizing a representation of the fundamental group of the closed topological space-time 4-manifold. Mass and spin appear to be topological invariants. Such a concept explains all the so-called “strange” properties of the quantum formalism: probabilities, wave-particle duality, nonlocal instantaneous correlation between noninteracting particles (the EPR paradox), and so on. Acceptance of the suggested geometrical concept means rejection of the atomistic concept, where all matter is considered as consisting of smaller and smaller elementary particles. There are no particles a priori, before measurement: the notion of particles appears as a result of the classical interpretation of the contact of the region of curved space with a device.

  3. ATTITUDE FILTERING ON SO(3)

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2005-01-01

    A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.

  4. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer, to precisely control color. We then obtained psychophysical judgements to measure how well these methods work to minimize perceptual distortion in a variety of color spaces.

  5. [Triatominae and Cactaceae: a risk for the transmission of the American trypanosomiasis in the peridomicilary space (Northeast Brazil)].

    PubMed

    Emperaire, L; Romaña, C A

    2006-06-01

    Field observations carried out in the semi-arid Brazilian Northeast point out the frequent association, in the peridomiciliary space, between a cactus, Cereus jamacaru, the occurrence of nests in its branches, and the occurrence of two species of insect vectors of Trypanosoma cruzi, the pathogenic agent of Chagas disease: Rhodnius neglectus and Triatoma pseudomaculata. The analysis of the architectural variables of this Cactaceae shows that the presence of nests, and thus of insects, depends on the traditional practices of management of this cactus. This study underlines the relevance of an integrated approach to the ecology of Triatominae for the identification of risk factors.

  6. Wigner functions for nonparaxial, arbitrarily polarized electromagnetic wave fields in free space.

    PubMed

    Alonso, Miguel A

    2004-11-01

    New representations are defined for describing electromagnetic wave fields in free space exactly in terms of rays for any wavelength, level of coherence or polarization, and numerical aperture, as long as there are no evanescent components. These representations correspond to tensors assigned to each ray such that the electric and magnetic energy densities, the Poynting vector, and the polarization properties of the field correspond to simple integrals involving these tensors for the rays that go through the specified point. For partially coherent fields, the ray-based approach provided by the new representations can reduce dramatically the computation times for the physical properties mentioned earlier.

  7. Eigenvalue approach to coupled thermoelasticity in a rotating isotropic medium

    NASA Astrophysics Data System (ADS)

    Bayones, F. S.; Abd-Alla, A. M.

    2018-03-01

    In this paper the linear theory of thermoelasticity has been employed to study the effect of rotation in a thermoelastic half-space containing a heat source on the boundary of the half-space. It is assumed that the medium under consideration is traction free, homogeneous, and isotropic, as well as without energy dissipation. Normal mode analysis has been applied to the basic equations of coupled thermoelasticity, and the resulting equations are written in the form of a vector-matrix differential equation, which is then solved by an eigenvalue approach. Numerical results for the displacement components, stresses, and temperature are given and illustrated graphically. Comparison was made with the results obtained in the presence and absence of rotation. The results indicate that the effects of rotation, the non-dimensional thermal wave, and time are very pronounced.

  8. Optical design of transmitter lens for asymmetric distributed free space optical networks

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek; Traczyk, Maciej

    2018-05-01

    We present a method of transmitter lens design dedicated to light distribution shaping on a curved and asymmetric target. In this context, the target is understood as a surface determined by hypothetical optical detector locations. In the proposed method, ribbon-like surfaces of arbitrary shape are considered. The designed lens has the task of transforming a collimated and generally non-uniform input beam into a desired irradiance distribution on such irregular targets. The desired irradiance is associated with the space-dependent efficiency of power flow between the source and receivers distributed on the target surface. This unconventional nonimaging task is different from most illumination or beam shaping objectives, where constant or prescribed irradiance has to be produced on a flat target screen. The discussed optical challenge comes from applications where a single transmitter cooperates with a multitude of receivers located in various positions in space and oriented in various directions. The proposed approach is not limited to optical networks, but can be applied in a variety of other applications where a nonconventional irradiance distribution has to be engineered. The described method of lens design is based on geometrical optics, radiometry, and a ray mapping philosophy. Rays are processed as a vector field, each of them carrying a certain amount of power. Given the target surface shape and the orientation of the receiver distribution, the ray-surface crossing map is calculated. It corresponds to the output ray vector field, which is referred to the calculated spatial distribution of input rays on the designed optical surface. The application of Snell's law in vector form allows one to obtain the local surface normal vector and calculate the lens profile. In the paper, we also present a case study dealing with an exemplary optical network. The designed freeform lens is implemented in commercially available optical design software and the three-dimensional spatial distribution of irradiance is examined, showing perfect agreement with expectations.

  9. Representation of magnetic fields in space

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1975-01-01

    Several methods by which a magnetic field in space can be represented are reviewed, with particular attention to problems of the observed geomagnetic field. Time dependence is assumed to be negligible, and five main classes of representation are described: vector potential, scalar potential, orthogonal vectors, Euler potentials, and expansions of the magnetic field.

  10. Knowledge Space: A Conceptual Basis for the Organization of Knowledge

    ERIC Educational Resources Information Center

    Meincke, Peter P. M.; Atherton, Pauline

    1976-01-01

    Proposes a new conceptual basis for visualizing the organization of information, or knowledge, which differentiates between the concept "vectors" for a field of knowledge represented in a multidimensional space, and the state "vectors" for a person based on his understanding of these concepts, and the representational…

  11. Color TV: total variation methods for restoration of vector-valued images.

    PubMed

    Blomgren, P; Chan, T F

    1998-01-01

    We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
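
    A channel-coupled TV norm of this kind can be sketched in NumPy: forward differences per channel, a Euclidean norm across the stacked channel gradients at each pixel, and a sum over pixels. This is a simplified discretization in the spirit of the paper, not the authors' exact formulation:

```python
import numpy as np

def color_tv(img):
    """Discrete vector-valued TV: at each pixel take the Euclidean norm
    of the per-channel forward-difference gradients (coupling the color
    planes), then sum over pixels. img has shape (H, W, channels)."""
    gy = np.diff(img, axis=0)[:, :-1, :]   # vertical differences
    gx = np.diff(img, axis=1)[:-1, :, :]   # horizontal differences
    # crop both to a common (H-1, W-1) grid before combining
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2)).sum()
```

    A constant image has zero TV, and a sharp edge contributes in proportion to its length and jump height rather than the jump squared, which is why edges survive TV-based restoration.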

  12. Planning collision free paths for two cooperating robots using a divide-and-conquer C-space traversal heuristic

    NASA Technical Reports Server (NTRS)

    Weaver, Johnathan M.

    1993-01-01

    A method was developed to plan feasible and obstacle-avoiding paths for two spatial robots working cooperatively in a known static environment. Cooperating spatial robots as referred to herein are robots which work in 6D task space while simultaneously grasping and manipulating a common, rigid payload. The approach is configuration space (c-space) based and performs selective rather than exhaustive c-space mapping. No expensive precomputations are required. A novel, divide-and-conquer type of heuristic is used to guide the selective mapping process. The heuristic does not involve any robot, environment, or task specific assumptions. A technique was also developed which enables solution of the cooperating redundant robot path planning problem without requiring the use of inverse kinematics for a redundant robot. The path planning strategy involves first attempting to traverse along the configuration space vector from the start point towards the goal point. If an unsafe region is encountered, an intermediate via point is identified by conducting a systematic search in the hyperplane orthogonal to and bisecting the unsafe region of the vector. This process is repeatedly applied until a solution to the global path planning problem is obtained. The basic concept behind this strategy is that better local decisions at the beginning of the trouble region may be made if a possible way around the 'center' of the trouble region is known. Thus, rather than attempting paths which look promising locally (at the beginning of a trouble region) but which may not yield overall results, the heuristic attempts local strategies that appear promising for circumventing the unsafe region.
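
    A 2-D toy version of this divide-and-conquer heuristic can be sketched as follows. The sampling resolution, step sizes, and recursion depth are arbitrary choices of ours, and the real planner operates in the robots' high-dimensional joint c-space rather than the plane:

```python
def plan(start, goal, safe, depth=8):
    """Try the straight c-space segment from start to goal; if it is
    blocked, search outward along the perpendicular bisector of the
    segment for a safe via point, then recurse on both halves."""
    def segment_safe(a, b, samples=16):
        return all(safe((a[0] + (b[0] - a[0]) * t / samples,
                         a[1] + (b[1] - a[1]) * t / samples))
                   for t in range(samples + 1))

    if segment_safe(start, goal):
        return [start, goal]
    if depth == 0:
        return None
    mx, my = (start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    for step in range(1, 50):            # widen the bisector search
        for sign in (1, -1):             # try both sides of the segment
            via = (mx - sign * dy * step * 0.1, my + sign * dx * step * 0.1)
            if safe(via):
                left = plan(start, via, safe, depth - 1)
                right = plan(via, goal, safe, depth - 1)
                if left and right:
                    return left[:-1] + right
    return None
```

    With a unit-disk obstacle at the origin, planning from (-2, 0) to (2, 0) detours through a via point above or below the disk, mirroring the "find a way around the center of the trouble region" idea described above.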

  13. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    DTIC Science & Technology

    2016-12-01

    Proving Ground, MD 21005-5068 This report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional space (3D) and a Rotation...class for performing rotations of vectors in 3D. Each class is self-contained in a single header file (Vector.h and Rotation.h) so that a C...vector, rotation, 3D, quaternion, C++ tools, rotation sequence, Euler angles, yaw, pitch, roll, orientation 98 Richard Saucier 410-278-6721 Unclassified
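
    As a language-neutral illustration of the rotation operation such a class provides, here is quaternion rotation of a 3-D vector in Python. This is a sketch of the standard formula, not the report's C++ API:

```python
import math

def quat_rotate(q, v):
    """Rotate 3-D vector v by unit quaternion q = (w, x, y, z) using
    v' = v + w*t + u x t with t = 2*(u x v), where u = (x, y, z).
    Algebraically equal to q * (0, v) * conj(q)."""
    w, x, y, z = q
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    return (v[0] + w * tx + y * tz - z * ty,
            v[1] + w * ty + z * tx - x * tz,
            v[2] + w * tz + x * ty - y * tx)

# 90-degree yaw about z maps the x-axis onto the y-axis
half = math.radians(45.0)
yawed = quat_rotate((math.cos(half), 0.0, 0.0, math.sin(half)),
                    (1.0, 0.0, 0.0))
```

    Quaternions avoid the gimbal-lock issues of yaw-pitch-roll sequences, which is presumably why the report's Rotation class supports both.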

  14. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
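
    The batching idea is easy to demonstrate: applying the operator to k right-hand sides as one matrix-matrix product streams the matrix from memory once instead of k times. The snippet below is a dense NumPy analogue of the sparse lattice operator, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((64, 64))       # stand-in for the lattice operator
rhs = rng.standard_normal((64, 8))      # 8 right-hand sides, one per column

# Batched: one pass over A's entries serves all 8 vectors at once.
batched = A @ rhs

# Unbatched: A is re-read from memory once per right-hand side.
looped = np.stack([A @ rhs[:, j] for j in range(rhs.shape[1])], axis=1)
```

    Both forms compute the same result; the batched form does the same arithmetic per vector but amortizes the memory traffic for A, which is the bottleneck the abstract describes.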

  15. Observation of Polarization Vortices in Momentum Space

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian

    2018-05-01

    The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.

  16. Observation of Polarization Vortices in Momentum Space.

    PubMed

    Zhang, Yiwen; Chen, Ang; Liu, Wenzhe; Hsu, Chia Wei; Wang, Bo; Guan, Fang; Liu, Xiaohan; Shi, Lei; Lu, Ling; Zi, Jian

    2018-05-04

    The vortex, a fundamental topological excitation featuring the in-plane winding of a vector field, is important in various areas such as fluid dynamics, liquid crystals, and superconductors. Although commonly existing in nature, vortices were observed exclusively in real space. Here, we experimentally observed momentum-space vortices as the winding of far-field polarization vectors in the first Brillouin zone of periodic plasmonic structures. Using homemade polarization-resolved momentum-space imaging spectroscopy, we mapped out the dispersion, lifetime, and polarization of all radiative states at the visible wavelengths. The momentum-space vortices were experimentally identified by their winding patterns in the polarization-resolved isofrequency contours and their diverging radiative quality factors. Such polarization vortices can exist robustly on any periodic systems of vectorial fields, while they are not captured by the existing topological band theory developed for scalar fields. Our work provides a new way for designing high-Q plasmonic resonances, generating vector beams, and studying topological photonics in the momentum space.

  17. Adenoviral Vector Immunity: Its Implications and Circumvention Strategies

    PubMed Central

    Ahi, Yadvinder S.; Bangari, Dinesh S.; Mittal, Suresh K.

    2014-01-01

    Adenoviral (Ad) vectors have emerged as a promising gene delivery platform for a variety of therapeutic and vaccine purposes during the last two decades. However, the presence of preexisting Ad immunity and the rapid development of Ad vector immunity still pose significant challenges to the clinical use of these vectors. The innate inflammatory response following Ad vector administration may lead to systemic toxicity, drastically limit vector transduction efficiency, and significantly abbreviate the duration of transgene expression. Currently, a number of approaches are being extensively pursued to overcome these drawbacks by strategies that target either the host or the Ad vector. In addition, significant progress has been made in the development of novel Ad vectors based on less prevalent human Ad serotypes and nonhuman Ad. This review provides an update on our current understanding of immune responses to Ad vectors and delineates various approaches for eluding Ad vector immunity. Approaches targeting the host and those targeting the vector are discussed in light of their promises and limitations. PMID:21453277

  18. Analysis of structural response data using discrete modal filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.

    1991-01-01

    The application of reciprocal modal vectors to the analysis of structural response data is described. Reciprocal modal vectors are constructed using an existing experimental modal model and an existing frequency response matrix of a structure, and can be assembled into a matrix that effectively transforms the data from the physical space to a modal space within a particular frequency range. In other words, the weighting matrix necessary for modal vector orthogonality (typically the mass matrix) is contained within the reciprocal modal matrix. The underlying goal of this work is mostly directed toward observing the modal state responses in the presence of unknown, possibly closed loop forcing functions, thus having an impact on both operating data analysis techniques and independent modal space control techniques. This study investigates the behavior of reciprocal modal vectors as modal filters with respect to certain calculation parameters and their performance with perturbed system frequency response data.

  19. Topic detection using paragraph vectors to support active learning in systematic reviews.

    PubMed

    Hashimoto, Kazuma; Kontonatsios, Georgios; Miwa, Makoto; Ananiadou, Sophia

    2016-08-01

    Systematic reviews require expert reviewers to manually screen thousands of citations in order to identify all relevant articles to the review. Active learning text classification is a supervised machine learning approach that has been shown to significantly reduce the manual annotation workload by semi-automating the citation screening process of systematic reviews. In this paper, we present a new topic detection method that induces an informative representation of studies, to improve the performance of the underlying active learner. Our proposed topic detection method uses a neural network-based vector space model to capture semantic similarities between documents. We first represent documents within the vector space, and cluster the documents into a predefined number of clusters. The centroids of the clusters are treated as latent topics. We then represent each document as a mixture of latent topics. For evaluation purposes, we employ the active learning strategy using both our novel topic detection method and a baseline topic model (i.e., Latent Dirichlet Allocation). Results obtained demonstrate that our method is able to achieve high sensitivity in identifying eligible studies and a significantly reduced manual annotation cost when compared to the baseline method. This observation is consistent across two clinical and three public health reviews. The tool introduced in this work is available from https://nactem.ac.uk/pvtopic/. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
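
    The clustering step can be sketched with a few plain k-means iterations in NumPy, treating the centroids as latent topics and re-expressing each document as a mixture over them. The naive first-k initialization, cluster count, and softmax weighting below are our illustrative choices, not the paper's:

```python
import numpy as np

def topic_mixtures(doc_vecs, k=2, iters=10):
    """Cluster document vectors with plain k-means; treat the k
    centroids as latent topics and return each document's mixture
    over them (softmax of negative distance to each centroid)."""
    centroids = doc_vecs[:k].copy()          # naive init, for brevity
    for _ in range(iters):
        dists = np.linalg.norm(doc_vecs[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():          # leave empty clusters in place
                centroids[j] = doc_vecs[labels == j].mean(axis=0)
    dists = np.linalg.norm(doc_vecs[:, None, :] - centroids[None], axis=2)
    weights = np.exp(-dists)
    return weights / weights.sum(axis=1, keepdims=True)
```

    Each row of the result sums to one, so a document's row reads directly as its mixture over the latent topics, the representation handed to the active learner.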

  20. Multiband tangent space mapping and feature selection for classification of EEG during motor imagery.

    PubMed

    Islam, Md Rabiul; Tanaka, Toshihisa; Molla, Md Khademul Islam

    2018-05-08

    When designing a multiclass motor imagery-based brain-computer interface (MI-BCI), the so-called tangent space mapping (TSM) method, which exploits the geometric structure of covariance matrices, is an effective technique. This paper introduces a TSM-based method for finding the operational frequency bands of brain activities associated with MI tasks. A multichannel electroencephalogram (EEG) signal is decomposed into multiple subbands, and tangent features are then estimated on each subband. A mutual information analysis-based algorithm is implemented to select subbands containing features capable of improving motor imagery classification accuracy. The features obtained from the selected subbands are combined to form the feature space. A principal component analysis-based approach is employed to reduce the feature dimension, and classification is then accomplished by a support vector machine (SVM). Offline analysis demonstrates that the proposed multiband tangent space mapping with subband selection (MTSMS) approach outperforms state-of-the-art methods. It achieves the highest average classification accuracy for all datasets (BCI competition dataset 2a, IIIa, IIIb, and dataset JK-HH1). The increased classification accuracy of MI tasks with the proposed MTSMS approach can yield effective implementation of BCI. The mutual information-based subband selection method is implemented to tune the operational frequency bands to represent actual motor imagery tasks.
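
    The core tangent space mapping step can be sketched as follows. This is a hedged illustration, not the authors' implementation: the identity matrix is used as the reference point, whereas a mean covariance over trials is common in practice, and the toy "EEG" data are random:

```python
import numpy as np

def _sym_apply(C, fn):
    """Apply a scalar function to a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * fn(w)) @ V.T

def tangent_vector(C, C_ref):
    """Map an SPD covariance matrix C to the tangent space at C_ref."""
    P = _sym_apply(C_ref, lambda w: w ** -0.5)   # C_ref^(-1/2)
    S = _sym_apply(P @ C @ P, np.log)            # matrix log of whitened C
    iu = np.triu_indices_from(S)
    # off-diagonal entries weighted by sqrt(2) so the norm is preserved
    wts = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return wts * S[iu]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 200))            # toy 4-channel EEG segment
feat = tangent_vector(np.cov(X), np.eye(4))
```

    For an n-channel subband covariance, the tangent feature has n(n+1)/2 entries; in the paper, such vectors from the selected subbands are concatenated before PCA and SVM classification.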

  1. Modeling Musical Context With Word2Vec

    NASA Astrophysics Data System (ADS)

    Herremans, Dorien; Chuan, Ching-Hua

    2017-05-01

    We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Next, an excerpt of Beethoven's Moonlight Sonata was altered by replacing slices based on context similarity. The resulting music shows that a slice selected on the basis of similar word2vec context also has a relatively short tonal distance from the original slice.
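
    A minimal skip-gram-with-negative-sampling trainer over symbolic "slices" might look like the sketch below. The chord-like tokens are hypothetical stand-ins for the paper's polyphonic slices, and the toy trainer is an assumption for illustration, not the authors' word2vec setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sgns(corpus, dim=8, window=2, negative=5, lr=0.05, epochs=40):
    """Tiny skip-gram trainer with negative sampling (illustrative only)."""
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    W_in = (rng.random((n, dim)) - 0.5) / dim     # "slice" vectors
    W_out = np.zeros((n, dim))                    # context vectors
    for _ in range(epochs):
        for sent in corpus:
            ids = [idx[w] for w in sent]
            for pos, center in enumerate(ids):
                lo, hi = max(0, pos - window), min(len(ids), pos + window + 1)
                for ctx in ids[lo:pos] + ids[pos + 1:hi]:
                    # one positive pair plus `negative` random negatives
                    targets = np.r_[ctx, rng.integers(0, n, negative)]
                    labels = np.r_[1.0, np.zeros(negative)]
                    v_c = W_in[center].copy()
                    scores = 1.0 / (1.0 + np.exp(-W_out[targets] @ v_c))
                    grad = (scores - labels)[:, None]
                    W_in[center] -= lr * (grad * W_out[targets]).sum(axis=0)
                    # duplicate negative indices are not accumulated; fine for a sketch
                    W_out[targets] -= lr * grad * v_c
    return vocab, idx, W_in

# hypothetical "sentences" of symbolic music slices (chord labels)
corpus = [["C", "F", "G", "C"], ["C", "G", "C", "F"],
          ["a", "d", "E", "a"], ["a", "E", "a", "d"]] * 5
vocab, idx, W = train_sgns(corpus)
```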

  2. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers the following problems: 1) larger computational burden; 2) no explicit mapping functions in KDLPP, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain optimal discriminant vectors, which exceedingly optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inverse, which extracts the optimal discriminant vectors for DLPP without larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  3. A regularized approach for geodesic-based semisupervised multimanifold learning.

    PubMed

    Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun

    2014-05-01

    Geodesic distance, as an essential measurement of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. On this basis, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of the original data points with feature vectors built from geodesic distances, and a new semisupervised dimension reduction method for these feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
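
    The idea of using geodesic distances as features can be sketched with a kNN graph and Floyd-Warshall shortest paths. This is an illustrative approximation; the paper's semisupervised graph construction and regularization are omitted:

```python
import numpy as np

def geodesic_distances(X, k=2):
    """Approximate geodesic distances via a kNN graph + Floyd-Warshall."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]     # k nearest neighbours
    for i in range(n):
        G[i, nn[i]] = D[i, nn[i]]
        G[nn[i], i] = D[i, nn[i]]              # keep the graph symmetric
    for mid in range(n):                        # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, mid:mid + 1] + G[mid:mid + 1, :])
    return G

# toy data on a 1-D curve (half circle) embedded in the plane
t = np.linspace(0.0, np.pi, 20)
X = np.c_[np.cos(t), np.sin(t)]
G = geodesic_distances(X)
# each row of G can now serve as that point's geodesic feature vector
```

    On the half circle, the geodesic distance between the endpoints follows the arc and so exceeds their straight-line Euclidean distance, which is exactly the nonlinearity the feature vectors are meant to capture.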

  4. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct), data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined. The direct space domain processing technique was found to be far superior to any other techniques known, in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.

  5. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than that of any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain good classification accuracy.

  6. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval.

    PubMed

    Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R

    2015-10-01

    This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.
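
    The entropy-based "visualness" measure lends itself to a compact sketch (assuming grayscale patches with 8-bit intensities; the binning choice is illustrative, not taken from the paper):

```python
import numpy as np

def visualness(patch, bins=32):
    """Shannon entropy (bits) of a patch's pixel-intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # treat 0 * log(0) as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat = np.full((16, 16), 128)                    # homogeneous patch
textured = rng.integers(0, 256, size=(16, 16))   # noisy, "visual" patch
```

    A homogeneous patch scores zero, while a textured patch approaches log2(bins) bits; in the paper such entropies weight the concept feature vector.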

  7. Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval

    PubMed Central

    Rahman, Md. Mahmudur; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.

    2015-01-01

    Abstract. This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term “concept” refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature. PMID:26730398

  8. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.

  9. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    NASA Technical Reports Server (NTRS)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
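
    The derived-space idea can be illustrated on a toy 2-D field: map each grid point to derived quantities (here speed and vorticity, as an assumed example) and express a condition query as a geometric partition of that derived space:

```python
import numpy as np

# sample a 2-D vector field v = (u, w) on a regular grid
y, x = np.mgrid[-2:2:40j, -2:2:40j]
u, w = -y, x                         # solid-body rotation about the origin
h = 4.0 / 39.0                       # grid spacing

# derived-space mapping: each grid point -> (speed, vorticity)
speed = np.hypot(u, w)
vorticity = np.gradient(w, axis=1) / h - np.gradient(u, axis=0) / h

# condition query as a geometric partition of the derived space:
# "slow and strongly rotating" loci -> candidate vortex-core points
mask = (speed < 0.5) & (vorticity > 1.0)
```

    The query itself is just a half-plane intersection in the (speed, vorticity) plane; swapping in other derived quantities or partitions yields other feature detectors, which is the unification the abstract describes.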

  10. A note on φ-analytic conformal vector fields

    NASA Astrophysics Data System (ADS)

    Deshmukh, Sharief; Bin Turki, Nasser

    2017-09-01

    Taking clue from the analytic vector fields on a complex manifold, φ-analytic conformal vector fields are defined on a Riemannian manifold (Deshmukh and Al-Solamy in Colloq. Math. 112(1):157-161, 2008). In this paper, we use φ-analytic conformal vector fields to find new characterizations of the n-sphere Sn(c) and the Euclidean space (Rn,<,> ).

  11. An Alternative to the Gauge Theoretic Setting

    NASA Astrophysics Data System (ADS)

    Schroer, Bert

    2011-10-01

    The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum theoretical approach in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space, whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict-creating pointlike generation for the tightest consistent localization: semi-infinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.

  12. Inclusive τ lepton hadronic decay in vector and axial-vector channels within dispersive approach to QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesterenko, A. V.

    The dispersive approach to QCD, which properly embodies the intrinsically nonperturbative constraints originating in the kinematic restrictions on relevant physical processes and extends the applicability range of perturbation theory towards the infrared domain, is briefly overviewed. The study of OPAL (update 2012) and ALEPH (update 2014) experimental data on inclusive τ lepton hadronic decay in vector and axial-vector channels within the dispersive approach is presented.

  13. Detecting dynamical boundaries from kinematic data in biomechanics

    NASA Astrophysics Data System (ADS)

    Ross, Shane D.; Tanaka, Martin L.; Senatore, Carmine

    2010-03-01

    Ridges in the state space distribution of finite-time Lyapunov exponents can be used to locate dynamical boundaries. We describe a method for obtaining dynamical boundaries using only trajectories reconstructed from time series, expanding on the current approach which requires a vector field in the phase space. We analyze problems in musculoskeletal biomechanics, considered as exemplars of a class of experimental systems that contain separatrix features. Particular focus is given to postural control and balance, considering both models and experimental data. Our success in determining the boundary between recovery and failure in human balance activities suggests this approach will provide new robust stability measures, as well as measures of fall risk, that currently are not available and may have benefits for the analysis and prevention of low back pain and falls leading to injury, both of which affect a significant portion of the population.

  14. A variable structure approach to robust control of VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Kramer, F.

    1982-01-01

    This paper examines the application of variable structure control theory to the design of a flight control system for the AV-8A Harrier in a hover mode. The objective in variable structure design is to confine the motion to a subspace of the total state space. The motion in this subspace is insensitive to system parameter variations and external disturbances that lie in the range space of the control. A switching type of control law results from the design procedure. The control system was designed to track a vector velocity command defined in the body frame. For comparison purposes, a proportional controller was designed using optimal linear regulator theory. Both control designs were first evaluated for transient response performance using a linearized model, then a nonlinear simulation study of a hovering approach to landing was conducted. Wind turbulence was modeled using a 1052 destroyer class air wake model.

  15. Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem

    DTIC Science & Technology

    1999-12-01

    The non-linear least squares model is defined as Y = f(θ, t), where θ is the M-element parameter vector, Y is the N-element vector of all data, and t is … (Naval Postgraduate School, Monterey, California; Master's thesis, December 1999.)

  16. Modeling and parameter identification of impulse response matrix of mechanical systems

    NASA Astrophysics Data System (ADS)

    Bordatchev, Evgueni V.

    1998-12-01

    A method for studying the problem of modeling, identification and analysis of mechanical system dynamic characteristics in view of the impulse response matrix, for the purpose of adaptive control, is developed here. Two types of impulse response matrices are considered: (i) on displacement, which describes the space-coupled relationship between the vectors of force and simulated displacement, and (ii) on acceleration, which describes the space-coupled relationship between the vectors of force and measured acceleration. The idea of identification consists of: (a) practically obtaining the impulse response matrix on acceleration by the 'impact-response' technique; (b) modeling and parameter estimation of each impulse response function on acceleration through the fundamental representation of the impulse response function on displacement as a sum of damped sine curves, applying linear and non-linear least squares methods; (c) simulating the impulse, which provides the additional possibility of calculating masses, damper and spring constants. The damped natural frequencies are used as a priori information and are found through standard FFT analysis. The problem of double numerical integration is avoided by taking two derivatives of the fundamental dynamic model of a mechanical system as a linear combination of mass-damper-spring subsystems. The identified impulse response matrix on displacement represents the dynamic properties of the mechanical system. From the engineering point of view, this matrix can also be understood as a 'dynamic passport' of the mechanical system and can be used for dynamic certification and analysis of dynamic quality. In addition, the suggested approach mathematically reproduces the amplitude-frequency response matrix in a low-frequency band and at zero frequency. This allows the possibility of determining the matrix of static stiffness through dynamic testing over a period of 10-15 minutes. As a practical example, the dynamic properties, in view of the impulse and frequency response matrices, of a lathe spindle are obtained, identified and investigated. The developed approach to modeling and parameter identification appears promising for a wide range of industrial applications, for example, rotary systems.

  17. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors associated with the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low value of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge, for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. 
An alternative is demonstrated here based on extreme value statistics which results in a much better threshold which avoids false positives in the training data, while allowing detection of all damaged cases.
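
    The eigen-decomposition argument above can be verified numerically: the Mahalanobis squared-distance equals a sum of independent per-component terms, and components with large (environmental) eigenvalues contribute little. A sketch on synthetic data (not the bridge data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training features with one dominant "environmental" direction
n, p = 500, 6
X = rng.normal(size=(n, p))
X[:, 0] *= 10.0                       # large environmental variability
mean = X.mean(axis=0)
cov = np.cov(X, rowvar=False)

w, V = np.linalg.eigh(cov)            # eigen-decomposition of the covariance

def mahalanobis_sq(sample):
    """Mahalanobis squared-distance as a sum of independent terms."""
    d = sample - mean
    # component i contributes (v_i . d)^2 / lambda_i: directions with
    # large eigenvalues (the environment) contribute little to the sum
    return float((((V.T @ d) ** 2) / w).sum())

x_new = rng.normal(size=p)
direct = float((x_new - mean) @ np.linalg.solve(cov, x_new - mean))
```

    The component form and the direct quadratic form agree, which is the identity the paper exploits to show that the distance automatically down-weights environmental variability.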

  18. Using latent semantic analysis and the predication algorithm to improve extraction of meanings from a diagnostic corpus.

    PubMed

    Jorge-Botana, Guillermo; Olmos, Ricardo; León, José Antonio

    2009-11-01

    There is currently widespread interest in indexing and extracting taxonomic information from large text collections. An example is the automatic categorization of informally written medical or psychological diagnoses, followed by the extraction of epidemiological information or even terms and structures needed to formulate guiding questions as a heuristic tool for helping doctors. Vector space models have been successfully used to this end (Lee, Cimino, Zhu, Sable, Shanker, Ely & Yu, 2006; Pakhomov, Buntrock & Chute, 2006). In this study we use a computational model known as Latent Semantic Analysis (LSA) on a diagnostic corpus with the aim of retrieving definitions (in the form of lists of semantic neighbors) of common structures it contains (e.g. "storm phobia", "dog phobia") or less common structures that might be formed by logical combinations of categories and diagnostic symptoms (e.g. "gun personality" or "germ personality"). In the quest to bring definitions into line with the meaning of structures and make them in some way representative, various problems commonly arise while recovering content using vector space models. We propose some approaches which bypass these problems, such as Kintsch's (2001) predication algorithm and some corrections to the way lists of neighbors are obtained, which have already been tested on semantic spaces in a non-specific domain (Jorge-Botana, León, Olmos & Hassan-Montero, under review). The results support the idea that the predication algorithm may also be useful for extracting more precise meanings of certain structures from scientific corpora, and that the introduction of some corrections based on vector length may increase its efficiency on non-representative terms.
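
    A minimal LSA pipeline of the kind described, on a toy term-document matrix (the terms and counts are invented for illustration; the predication algorithm itself is not reproduced here):

```python
import numpy as np

# toy term-document count matrix (terms x documents)
terms = ["storm", "phobia", "dog", "anxiety", "fever", "infection"]
A = np.array([[2, 0, 1, 0],
              [2, 1, 2, 0],
              [0, 1, 2, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 2]], dtype=float)

# LSA: a truncated SVD projects terms into a low-rank semantic space
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]

def neighbours(word):
    """Rank the other terms by cosine similarity in the latent space."""
    i = terms.index(word)
    v = term_vecs[i]
    sims = term_vecs @ v / (np.linalg.norm(term_vecs, axis=1)
                            * np.linalg.norm(v) + 1e-12)
    return [terms[j] for j in np.argsort(-sims) if j != i]
```

    Lists of semantic neighbours of this kind are the raw material to which the paper's corrections and the predication algorithm are applied.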

  19. An Elementary Treatment of General Inner Products

    ERIC Educational Resources Information Center

    Graver, Jack E.

    2011-01-01

    A typical first course on linear algebra is usually restricted to vector spaces over the real numbers and the usual positive-definite inner product. Hence, the proof that dim(S) + dim(S[perpendicular]) = dim("V") is not presented in a way that is generalizable to non-positive-definite inner products or to vector spaces over other fields. In this…

  20. Spatial pattern evolution of Aedes aegypti breeding sites in an Argentinean city without a dengue vector control programme.

    PubMed

    Espinosa, Manuel O; Polop, Francisco; Rotela, Camilo H; Abril, Marcelo; Scavuzzo, Carlos M

    2016-11-21

    The main objective of this study was to obtain and analyse the space-time dynamics of Aedes aegypti breeding sites in Clorinda City, Formosa Province, Argentina, coupled with landscape analysis using the maximum entropy approach, in order to generate a dengue vector niche model. In urban areas without vector control activities, 12 entomologic (larval) samplings were performed over three years (October 2011 to October 2014). The entomologic surveillance area comprised 16,511 houses. Predictive models for Aedes distribution were developed using vector breeding abundance data, density analysis, clustering and geoprocessing techniques coupled with Earth observation satellite data. The spatial analysis showed a vector spatial distribution pattern with clusters of high density in the central region of Clorinda, with a well-defined high-risk area in the western part of the city. It also showed differential temporal behaviour among different areas, which could have implications for risk models and control strategies at the urban scale. The niche model obtained for Ae. aegypti, based on only one year of field data, showed that 85.8% of the distribution of breeding sites is explained by the percentage of water supply (48.2%), urban distribution (33.2%), and the percentage of urban coverage (4.4%). The consequences for the development of control strategies are discussed with reference to the results obtained using distribution maps based on environmental variables.

  1. Predicting disulfide connectivity from protein sequence using multiple sequence feature vectors and secondary structure.

    PubMed

    Song, Jiangning; Yuan, Zheng; Tan, Hao; Huber, Thomas; Burrage, Kevin

    2007-12-01

    Disulfide bonds are primary covalent crosslinks between two cysteine residues in proteins that play critical roles in stabilizing protein structures and are commonly found in extracytoplasmatic or secreted proteins. In protein folding prediction, the localization of disulfide bonds can greatly reduce the search in conformational space. Therefore, there is a great need to develop computational methods capable of accurately predicting disulfide connectivity patterns in proteins, which could have potentially important applications. We have developed a novel method to predict disulfide connectivity patterns from the protein primary sequence, using a support vector regression (SVR) approach based on multiple sequence feature vectors and secondary structure predicted by the PSIPRED program. The results indicate that our method could achieve a prediction accuracy of 74.4% and 77.9%, respectively, when averaged on proteins with two to five disulfide bridges using 4-fold cross-validation, measured at the protein and cysteine-pair levels on a well-defined non-homologous dataset. We assessed the effects of different sequence encoding schemes on the prediction performance of disulfide connectivity. It has been shown that a sequence encoding scheme based on multiple sequence feature vectors coupled with predicted secondary structure can significantly improve the prediction accuracy, enabling our method to outperform most other currently available predictors. Our work provides a complementary approach to current algorithms and should be useful in computationally assigning disulfide connectivity patterns, helping in the annotation of protein sequences generated by large-scale whole-genome projects. The prediction web server and Supplementary Material are accessible at http://foo.maths.uq.edu.au/~huber/disulfide

  2. Wigner functions on non-standard symplectic vector spaces

    NASA Astrophysics Data System (ADS)

    Dias, Nuno Costa; Prata, João Nuno

    2018-01-01

    We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

  3. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  4. Gamow-Teller response in the configuration space of a density-functional-theory-rooted no-core configuration-interaction model

    NASA Astrophysics Data System (ADS)

    Konieczka, M.; Kortelainen, M.; Satuła, W.

    2018-03-01

    Background: The atomic nucleus is a unique laboratory in which to study fundamental aspects of the electroweak interaction. This includes a question concerning the in-medium renormalization of the axial-vector current, which still lacks a satisfactory explanation. Study of the spin-isospin or Gamow-Teller (GT) response may provide valuable information on the quenching of the axial-vector coupling constant as well as on nuclear structure and nuclear astrophysics. Purpose: We have performed a seminal calculation of the GT response by using the no-core configuration-interaction approach rooted in multireference density functional theory (DFT-NCCI). The model treats isospin and rotational symmetries properly and can be applied to calculate both the nuclear spectra and transition rates in atomic nuclei, irrespective of their mass and particle-number parity. Methods: The DFT-NCCI calculation proceeds as follows: First, one builds a configuration space by computing the (multi)particle-(multi)hole Slater determinants relevant for a given physical problem. Next, one applies the isospin and angular-momentum projections and performs the isospin and K mixing in order to construct a model space composed of linearly dependent states of good angular momentum. Eventually, one mixes the projected states by solving the Hill-Wheeler-Griffin equation. Results: The method is applied to compute the GT strength distribution in selected N ≈ Z nuclei, including the p-shell 8Li and 8Be nuclei and the sd-shell well-deformed nucleus 24Mg. In order to demonstrate the flexibility of the approach we also present a calculation of the superallowed GT β decay in doubly-magic spherical 100Sn and the low-spin spectrum in 100In. Conclusions: It is demonstrated that the DFT-NCCI model is capable of capturing the GT response satisfactorily by using a relatively small configuration space, simultaneously exhausting the GT sum rule. 
The model, due to its flexibility and broad range of applicability, may either serve as a complement or even as an alternative to other theoretical approaches, including the conventional nuclear shell model.
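The final mixing step amounts to a generalized eigenvalue problem, the Hill-Wheeler-Griffin (HWG) equation H c = E N c, where H and N are the Hamiltonian and norm (overlap) kernels between non-orthogonal projected states. A minimal numerical sketch, with made-up 3x3 kernels (this is not the DFT-NCCI implementation):

```python
import numpy as np

# Hypothetical Hamiltonian kernel H and norm (overlap) kernel N between
# three non-orthogonal projected states (values for illustration only).
H = np.array([[-10.0, -2.0, -1.0],
              [ -2.0, -8.0, -0.5],
              [ -1.0, -0.5, -6.0]])
N = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])

# Discard near-zero-norm directions (linear dependence) before mixing.
n_eval, n_evec = np.linalg.eigh(N)
keep = n_eval > 1e-8
collective = n_evec[:, keep] / np.sqrt(n_eval[keep])  # orthonormalizes N

# Mix the projected states: diagonalize H in the orthonormalized basis,
# which is equivalent to solving the generalized problem H c = E N c.
H_ortho = collective.T @ H @ collective
energies, coeffs = np.linalg.eigh(H_ortho)
ground_state_energy = energies[0]
```

Removing the small-norm natural states first is what keeps the procedure stable when the projected states are (nearly) linearly dependent.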

  5. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. First, the suspension cable is discretized into elements by space points, and the mass and external forces of the cable are lumped at these points. The structural form of the cable is described by the positions of the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time-integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with experimental data. The results show that the calculated tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering applications.
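The central-difference update used in the final step can be sketched for a single degree of freedom; the cable solver applies the same recurrence to every space point (the spring test problem below is invented for illustration):

```python
import math

def central_difference_step(x_curr, x_prev, force, mass, dt):
    """Explicit central-difference update for a point mass:
    x_{n+1} = 2*x_n - x_{n-1} + dt**2 * F(x_n)/m."""
    return 2.0 * x_curr - x_prev + dt**2 * force / mass

# Single-DOF check: a linear spring F = -k*x released from rest should
# follow x(t) = cos(omega*t) with omega = sqrt(k/m).
k, m, dt = 4.0, 1.0, 1.0e-3
x0 = 1.0
x_prev = x0 + 0.5 * dt**2 * (-k * x0 / m)  # consistent fictitious step for v0 = 0
x_curr = x0
for _ in range(10_000):                    # integrate to t = 10 s
    x_prev, x_curr = x_curr, central_difference_step(
        x_curr, x_prev, -k * x_curr, m, dt)
exact = math.cos(math.sqrt(k / m) * 10.0)
```

The scheme is explicit and second-order accurate, but only conditionally stable: the "reasonable time-integration step" the abstract mentions must satisfy dt < 2/omega_max for the stiffest element.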

  6. Computational Performance of Intel MIC, Sandy Bridge, and GPU Architectures: Implementation of a 1D c++/OpenMP Electrostatic Particle-In-Cell Code

    DTIC Science & Technology

    2014-05-01

    fusion, space and astrophysical plasmas, but still the general picture can be presented quite well with the fluid approach [6, 7]. The microscopic...purpose computing CPU for algorithms where processing of large blocks of data is done in parallel. The reason for that is the GPU's highly effective...parallel structure. Most of the image and video processing computations involve heavy matrix and vector operations over large amounts of data and

  7. The Current State and TRL Assessment of People Tracking Technology for Video Surveillance Applications

    DTIC Science & Technology

    2014-09-01

    the feature-space used to represent the target. Sometimes we trade off keeping information about one domain of the target in exchange for robustness...Kullback-Leibler distance), can be used as a similarity function between a candidate target and a template. This approach is invariant to changes in scale...basis vectors to adapt to appearance change and learns the visual information that the set of targets have in common, which is used to reduce the

  8. Complex network construction based on user group attention sequence

    NASA Astrophysics Data System (ADS)

    Zhang, Gaowei; Xu, Lingyu; Wang, Lei

    2018-04-01

    In traditional complex network construction, the similarity between nodes is often used to define the edge weights and thereby build the network. However, this approach tends to focus only on the coupling between nodes, while ignoring the information transfer between nodes and its directionality. In the network public opinion space, based on the set of stock series that network groups pay attention to within a certain period of time, we vectorize the different stocks and build a complex network.

  9. Vector magnetic fields in sunspots. I - Stokes profile analysis using the Marshall Space Flight Center magnetograph

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, K. S.; West, E. A.

    1991-01-01

    The Marshall Space Flight Center (MSFC) vector magnetograph is a tunable filter magnetograph with a bandpass of 125 mA. Results are presented of the inversion of Stokes polarization profiles, observed with the MSFC vector magnetograph centered on a sunspot, to recover the vector magnetic field and thermodynamic parameters of the spectral-line-forming region from the Fe I 5250.2 A line using a nonlinear least-squares fitting technique. As a preliminary investigation, it is also shown that the recovered thermodynamic parameters could be better understood if fitted parameters like the Doppler width, opacity ratio, and damping constant were broken down into more basic quantities like temperature, microturbulent velocity, or density parameter.
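The kind of nonlinear least-squares profile fitting described above can be sketched on a synthetic absorption line (a stand-in for a Stokes I profile; the Gaussian model, parameter values, and noise level are invented for illustration and are not the MSFC pipeline):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "observed" line profile: a Gaussian absorption line plus noise.
wav = np.linspace(-1.0, 1.0, 201)        # wavelength offset from line centre
true = (0.8, 0.25, 0.0)                  # depth, Doppler width, centre shift

def profile(p, x):
    depth, width, shift = p
    return 1.0 - depth * np.exp(-((x - shift) / width) ** 2)

rng = np.random.default_rng(0)
observed = profile(true, wav) + 0.005 * rng.standard_normal(wav.size)

# Nonlinear least-squares fit of the model parameters to the observation.
fit = least_squares(lambda p: profile(p, wav) - observed, x0=(0.5, 0.2, 0.1))
depth, width, shift = fit.x
```

In the actual inversion the model is a full Stokes-profile solution and the parameter vector includes the field strength, inclination, and azimuth as well as the thermodynamic parameters, but the fitting machinery is the same.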

  10. Characteristic classes of gauge systems

    NASA Astrophysics Data System (ADS)

    Lyakhovich, S. L.; Sharapov, A. A.

    2004-12-01

    We define and study invariants which can be uniformly constructed for any gauge system. By a gauge system we understand an (anti-)Poisson supermanifold provided with an odd Hamiltonian self-commuting vector field, called a homological vector field. This definition encompasses all the cases usually included in the notion of a gauge theory in physics, as well as some other similar (but different) structures like Lie or Courant algebroids. For Lagrangian gauge theories or Hamiltonian first-class constrained systems, the homological vector field is identified with the classical BRST transformation operator. We define characteristic classes of a gauge system as universal cohomology classes of the homological vector field, which are uniformly constructed in terms of this vector field itself. Not striving to exhaustively classify all the characteristic classes in this work, we compute those invariants which are built up in terms of the first derivatives of the homological vector field. We also consider the cohomological operations in the space of all the characteristic classes. In particular, we show that the (anti-)Poisson bracket becomes trivial when applied to the space of all the characteristic classes; instead, the latter space can be endowed with another Lie bracket operation. Making use of this Lie bracket one can generate new characteristic classes involving higher derivatives of the homological vector field. The simplest characteristic classes are illustrated by examples relating them to anomalies in the traditional BV or BFV-BRST theory and to characteristic classes of (singular) foliations.

  11. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of the signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial-domain) signal subspace by introducing a new definition of the signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights into existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as time-domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently proposed dual signal subspace projection. Our analysis using the notion of the time-domain signal space projection reveals the implicit assumptions these methods rely on, and shows that the differences between these methods result only from the manner of deriving the interference subspace. Numerical examples that illustrate our arguments are provided.
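The spatial-domain signal space projection described above removes interference by projecting the sensor data onto the orthogonal complement of the span of the interference lead-field vectors, P = I - B B⁺. A minimal sketch with simulated data (sensor count, lead fields, and signal model are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 16, 200

# Hypothetical interference-source lead-field vectors spanning the
# interference subspace (3 interference sources).
B = rng.standard_normal((n_sensors, 3))

# Brain signal of interest plus strong interference confined to span(B).
signal = rng.standard_normal((n_sensors, n_samples))
interference = B @ rng.standard_normal((3, n_samples)) * 10.0
data = signal + interference

# Signal space projection: project onto the orthogonal complement of span(B).
P = np.eye(n_sensors) - B @ np.linalg.pinv(B)   # I - B B^+
cleaned = P @ data

# The interference component is annihilated up to round-off; the signal is
# distorted only by its own overlap with span(B).
residual_interference = np.linalg.norm(P @ interference)
```

The time-domain methods surveyed in the record replace span(B) with a subspace estimated from row (time-course) vectors, but the projection algebra is identical.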

  12. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space, with an all-passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed through interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry out experimental studies of general parity-time symmetry physics and further reveals the unique functionalities enabled by judiciously tailored unidirectional wave vectors in space.

  13. On Anholonomic Deformation, Geometry, and Differentiation

    DTIC Science & Technology

    2013-02-01

    αβχ are not necessarily Levi-Civita connection coefficients). The vector cross product × obeys, for two vectors V and W and two covectors α and β, V...three-dimensional space. 2.2.5. Euclidean space. Let GAB(X) = GA · GB be the metric tensor of the space. The Levi-Civita connection coefficients of GAB...curvature tensor of the Levi-Civita connection vanishes identically: {}^G R^A_{BCD} = 2(∂_{[B} {}^G Γ^A_{C]D} + {}^G Γ^A_{[B|E|} {}^G Γ^E_{C]D}) = 0. (43) In n

  14. Differential Calculus on h-Deformed Spaces

    NASA Astrophysics Data System (ADS)

    Herlemont, Basile; Ogievetsky, Oleg

    2017-10-01

    We construct the rings of generalized differential operators on the h-deformed vector space of gl-type. In contrast to the q-deformed vector space, where the ring of differential operators is unique up to an isomorphism, the general ring of h-deformed differential operators Diff_{h,σ}(n) is labeled by a rational function σ in n variables, satisfying an over-determined system of finite-difference equations. We obtain the general solution of the system and describe some properties of the rings Diff_{h,σ}(n).

  15. Support vector machine based decision for mechanical fault condition monitoring in induction motor using an advanced Hilbert-Park transform.

    PubMed

    Ben Salem, Samira; Bacha, Khmais; Chaari, Abdelkader

    2012-09-01

    In this work we suggest an original fault signature based on an improved combination of the Hilbert and Park transforms. Starting from this combination we can create two fault signatures: the Hilbert modulus current space vector (HMCSV) and the Hilbert phase current space vector (HPCSV). These two fault signatures are subsequently analysed using the classical fast Fourier transform (FFT). The effects of mechanical faults on the HMCSV and HPCSV spectra are described, and the related frequencies are determined. The magnitudes of the spectral components relative to the studied faults (air-gap eccentricity and outer-raceway ball bearing defect) are extracted in order to develop the input vector necessary for training and testing the support vector machine, with the aim of automatically classifying the various states of the induction motor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
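One plausible reading of the HMCSV signature can be sketched as follows: take the Hilbert transform of each phase current, form the Park space vector, and analyse its modulus with a classical FFT. The three-phase currents, fault-modulation frequency, and amplitudes below are invented for illustration and are not the authors' data:

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000.0, 50.0
t = np.arange(0, 1.0, 1 / fs)

# Hypothetical three-phase stator currents whose amplitude carries a small
# 25 Hz fault-related modulation.
mod = 1.0 + 0.05 * np.cos(2 * np.pi * 25.0 * t)
ia = mod * np.cos(2 * np.pi * f0 * t)
ib = mod * np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
ic = mod * np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)

# Hilbert transform of each phase, then the Park/Concordia space vector.
a, b, c = hilbert(ia), hilbert(ib), hilbert(ic)
i_d = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
i_q = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)

# Hilbert modulus of the current space vector, analysed by FFT: the
# fault-related modulation shows up as a low-frequency spectral line.
modulus = np.abs(i_d + 1j * i_q)
spectrum = np.abs(np.fft.rfft(modulus - modulus.mean()))
freqs = np.fft.rfftfreq(len(modulus), 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
```

The spectral magnitudes at such fault-related frequencies are exactly the kind of features the abstract feeds into the SVM classifier.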

  16. Evolution of Lamb Vector as a Vortex Breaking into Turbulence.

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Lu, X. Y.

    1996-11-01

    In an incompressible flow, either laminar or turbulent, the Lamb vector is solely responsible for nonlinear interactions. While its longitudinal part is balanced by the stagnation enthalpy, its transverse part is the unique source (acting as an external forcing in spectral space) that causes the flow to evolve. Moreover, in Reynolds-averaged flows the turbulent force can be derived exclusively from the Lamb vector instead of the full Reynolds stress tensor. Therefore, studying the evolution of the Lamb vector itself (both longitudinal and transverse parts) is of great interest. We have numerically examined this problem, taking the nonlinear destabilization of a viscous vortex as an example. In the later stage of this evolution we introduced a forcing to maintain a statistically steady state, and observed the behavior of the Lamb vector in the resulting fine-scale turbulence. The result is presented in both physical and spectral space.

  17. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

    Optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between a binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
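The inner-product recall loop can be sketched in a few lines as a software analogue of the optical computation (the bipolar encoding, pattern sizes, and thresholding rule below are illustrative assumptions, not the apparatus design):

```python
import numpy as np

def recall(memory, probe, n_iter=5):
    """Iterative inner-product associative recall for bipolar (+1/-1) patterns:
    weight each stored vector by its inner product with the current estimate,
    sum the weighted vectors, and threshold."""
    x = probe.astype(float)
    for _ in range(n_iter):
        weights = memory @ x              # inner products with stored patterns
        x = np.sign(memory.T @ weights)   # weighted sum, then threshold
        x[x == 0] = 1.0
    return x

rng = np.random.default_rng(1)
memory = rng.choice([-1.0, 1.0], size=(3, 128))  # three stored bipolar patterns
probe = memory[1].copy()
probe[:10] *= -1                                  # corrupt 10 of 128 elements
restored = recall(memory, probe)
```

Note that only inner products and one weighted sum per iteration are needed, which is why the scheme stores N reference vectors rather than an N x N weight matrix.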

  18. Enhancing membrane protein subcellular localization prediction by parallel fusion of multi-view features.

    PubMed

    Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu

    2012-12-01

    Membrane proteins are encoded by ~30% of the genes in a genome and play important roles in living organisms. Previous studies have revealed that the structures and functions of membrane proteins show obvious cell-organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from its primary sequence, considering the extreme difficulty of wet-lab studies of membrane proteins. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed with statistical machine learning algorithms on serial combinations of multi-view features, i.e., different feature vectors are simply concatenated to form a super feature vector. However, such simple combination of features simultaneously increases information redundancy, which can, in turn, deteriorate the final prediction accuracy. This is why prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. All the experimental results with different machine learning algorithms on benchmark membrane protein subcellular localization datasets demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. 
The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
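A common way to realize parallel (as opposed to serial) feature fusion, consistent with the complex spaces mentioned above, is to embed two real feature views as the real and imaginary parts of a single complex vector, zero-padding the shorter view. A minimal sketch (the feature views are invented; this is a generic construction, not necessarily the authors' exact one):

```python
import numpy as np

def parallel_fuse(view_a, view_b):
    """Parallel feature fusion: embed two real feature views as the real and
    imaginary parts of one complex vector, zero-padding the shorter view."""
    n = max(len(view_a), len(view_b))
    a = np.zeros(n); a[:len(view_a)] = view_a
    b = np.zeros(n); b[:len(view_b)] = view_b
    return a + 1j * b

# Two hypothetical sequence-derived feature views of different lengths.
aac = np.array([0.1, 0.4, 0.2, 0.3])   # e.g. a composition-style view
pse = np.array([0.05, 0.15, 0.3])      # e.g. a pseudo-composition view
z = parallel_fuse(aac, pse)

# Dimensionality stays max(d1, d2) instead of the d1 + d2 of serial
# concatenation, and distances in the complex space couple both views.
dim_parallel, dim_serial = len(z), len(aac) + len(pse)
```

Keeping the dimensionality at max(d1, d2) rather than d1 + d2 is what limits the redundancy growth the abstract attributes to serial fusion.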

  19. Disease Prediction based on Functional Connectomes using a Scalable and Spatially-Informed Support Vector Machine

    PubMed Central

    Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra

    2014-01-01

    Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268
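The regularized objective described above, a margin-based (hinge) loss plus a fused-lasso penalty over spatially adjacent connectome features, can be written down directly. The sketch below only evaluates the objective with a toy design matrix and an invented adjacency list; it does not implement the paper's augmented-Lagrangian/ADMM solver:

```python
import numpy as np

def svm_fused_lasso_objective(w, X, y, lam1, lam2, edges):
    """Hinge loss + fused-lasso penalty: a sparsity term on w and a fusion
    term over pairs of spatially adjacent features listed in `edges`."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w)).mean()
    sparsity = lam1 * np.abs(w).sum()
    fusion = lam2 * sum(abs(w[i] - w[j]) for i, j in edges)
    return hinge + sparsity + fusion

# Tiny hypothetical example: three features, with features 0-1 and 1-2
# treated as spatially adjacent.
w = np.array([1.0, 1.0, 0.0])
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
y = np.array([1.0, -1.0])
value = svm_fused_lasso_objective(w, X, y, lam1=0.1, lam2=0.1,
                                  edges=[(0, 1), (1, 2)])
```

The fusion term is what encourages the selected weights to form spatially contiguous regions in the 6-D connectome space.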

  20. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.

  1. Real-time object-to-features vectorisation via Siamese neural networks

    NASA Astrophysics Data System (ADS)

    Fedorenko, Fedor; Usilin, Sergey

    2017-03-01

    Object-to-features vectorisation is a hard problem to solve for objects that are hard to distinguish. Siamese and triplet neural networks are among the more recent tools for this task. However, most networks used are very deep networks that prove hard to compute in an Internet of Things setting. In this paper, a computationally efficient neural network is proposed for real-time object-to-features vectorisation into a Euclidean metric space. We use the L2 distance to reflect feature vector similarity during both training and testing. In this way, the feature vectors we develop can be easily classified using a K-Nearest Neighbours classifier. Such an approach can be used to train networks to vectorise such "problematic" objects as images of human faces and keypoint image patches, such as keypoints on Arctic maps and surrounding marine areas.
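The downstream classification step can be sketched as squared-L2 nearest-neighbour voting over embeddings; the 2-D "embeddings" below are synthetic stand-ins for the network's outputs:

```python
import numpy as np

def pairwise_l2(queries, references):
    """Squared L2 distances between each query and each reference embedding."""
    diff = queries[:, None, :] - references[None, :, :]
    return np.einsum('ijk,ijk->ij', diff, diff)

def knn_classify(query_vecs, ref_vecs, ref_labels, k=3):
    """k-nearest-neighbour classification in the learned embedding space."""
    d = pairwise_l2(query_vecs, ref_vecs)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = ref_labels[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

# Hypothetical 2-D embeddings: class 0 clusters near the origin,
# class 1 near (5, 5), as a well-trained Siamese network would produce.
rng = np.random.default_rng(0)
refs = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(5, 0.5, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)
pred = knn_classify(np.array([[0.2, -0.1], [4.8, 5.3]]), refs, labels)
```

Because the network is trained so that L2 distance reflects similarity, this plain kNN rule needs no further learned classifier.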

  2. The problem of exact interior solutions for rotating rigid bodies in general relativity

    NASA Technical Reports Server (NTRS)

    Wahlquist, H. D.

    1993-01-01

    The (3 + 1) dyadic formalism for timelike congruences is applied to derive interior solutions for stationary, axisymmetric, rigidly rotating bodies. In this approach the mathematics is formulated in terms of three-space-covariant, first-order, vector-dyadic, differential equations for a and Omega, the acceleration and angular velocity three-vectors of the rigid body; for T, the stress dyadic of the matter; and for A and B, the 'electric' and 'magnetic' Weyl curvature dyadics which describe the gravitational field. It is shown how an appropriate ansatz for the forms of these dyadics can be used to discover exact rotating interior solutions such as the perfect fluid solution first published in 1968. By incorporating anisotropic stresses, a generalization is found of that previous solution and, in addition, a very simple new solution that can only exist in toroidal configurations.

  3. A Support Vector Machine-Based Gender Identification Using Speech Signal

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk

    We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that classifies two groups by finding the optimal nonlinear boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel-frequency cepstral coefficients (MFCC). A novel approach incorporating a feature fusion scheme based on a combination of the MFCC and the fundamental frequency is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.

  4. Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe

    A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.

  5. Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations

    DOE PAGES

    Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; ...

    2017-11-14

    A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted to date.

  6. Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.

    2017-11-01

    A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted up to date.

  7. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
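The minimum-MSE representation described above can be sketched numerically: the orthonormal basis vectors are the eigenvectors of the sample covariance operator, and truncating to the leading eigenvectors gives the Karhunen-Loeve / principal-component representation (the data below are synthetic, with variance concentrated in two hidden directions):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 sample vectors in R^5 whose variance lies mostly in a 2-D subspace.
latent = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 5))
data = latent + 0.01 * rng.standard_normal((200, 5))

# Eigenvectors of the (zero-mean) sample covariance operator.
centered = data - data.mean(axis=0)
cov = centered.T @ centered / len(centered)
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
basis = eigvecs[:, ::-1]                 # reorder to descending variance

# Rank-m representation: project onto the m leading eigenvectors.
m = 2
coeffs = centered @ basis[:, :m]
reconstruction = coeffs @ basis[:, :m].T
mse = np.mean((centered - reconstruction) ** 2)
```

Among all rank-m orthonormal representations, this choice of basis minimizes the mean squared reconstruction error, which is the common property shared by the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions.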

  8. An Improved Wavefront Control Algorithm for Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Basinger, Scott A.; Redding, David C.

    2008-01-01

    Wavefront sensing and control is required throughout the mission lifecycle of large space telescopes such as the James Webb Space Telescope (JWST). When an optic of such a telescope is controlled with both surface-deforming and rigid-body actuators, the sensitivity matrix, obtained by dividing the exit-pupil wavefront vector by the corresponding actuator command value, can sometimes become singular due to differences in actuator types and in actuator command values. In this paper, we propose a simple approach for preventing a sensitivity matrix from becoming singular. We also introduce a new "minimum-wavefront and optimal control compensator". It uses an optimal control gain matrix obtained by feeding back the actuator commands along with the measured or estimated wavefront phase information to the estimator, thus eliminating the actuator modes that are not observable in the wavefront sensing process.

  9. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of wind turbine blades. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population in moving rapidly toward the Pareto front while maintaining population diversity with high efficiency. As an example, two- and three-objective designs of a 1.5 MW wind turbine blade are carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining the population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays a dramatic improvement in performance in both convergence and diversity preservation when handling complex problems with multiple variables, objectives, and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.

  10. Adaptive h -refinement for reduced-order models: ADAPTIVE h -refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by 'splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
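The splitting step can be sketched directly: a basis vector is split into children with disjoint support defined by a grouping of the state variables (here a hand-written grouping stands in for the offline k-means tree):

```python
import numpy as np

def split_basis_vector(v, groups):
    """Split one reduced-basis vector into several vectors with disjoint
    support, one per group of state variables (an h-refinement-like split)."""
    children = []
    for g in groups:
        child = np.zeros_like(v)
        child[g] = v[g]
        children.append(child)
    return children

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
# Hypothetical grouping of the six state variables, as an offline
# clustering of snapshot data might produce.
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
kids = split_basis_vector(v, groups)
recombined = sum(kids)
```

Because the children sum back to the parent, the refined space contains the original one, which is why repeated splitting can only decrease the best-approximation error, down to the full-order model in the limit.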

  11. Generalized Analysis Tools for Multi-Spacecraft Missions

    NASA Astrophysics Data System (ADS)

    Chanteur, G. M.

    2011-12-01

    Analysis tools for multi-spacecraft missions like CLUSTER or MMS have been designed since the end of the 1990s to estimate gradients of fields or to characterize discontinuities crossed by a cluster of spacecraft. Different approaches have been presented and discussed in the book "Analysis Methods for Multi-Spacecraft Data", published as Scientific Report 001 of the International Space Science Institute in Bern, Switzerland (G. Paschmann and P. Daly, Eds., 1998). On one hand, the approach using methods of least squares has the advantage of applying to any number of spacecraft [1], but it is not convenient for analytical computation, especially when considering the error analysis. On the other hand, the barycentric approach is powerful, as it provides simple analytical formulas involving the reciprocal vectors of the tetrahedron [2], but it appears limited to clusters of four spacecraft. Moreover, the barycentric approach allows one to derive theoretical formulas for the errors affecting the estimators built from the reciprocal vectors [2,3,4]. Following a first generalization of reciprocal vectors proposed by Vogt et al. [4], and despite the present lack of projects with more than four spacecraft, we present generalized reciprocal vectors for a cluster made of any number of spacecraft: each spacecraft is given a positive or null weight. The non-coplanarity of at least four spacecraft with strictly positive weights is a necessary and sufficient condition for this analysis to be enabled. Weights given to spacecraft allow one to minimize the influence of a spacecraft if its location or the quality of its data is not appropriate, or simply to extract subsets of spacecraft from the cluster. The estimators presented in [2] are generalized within this new frame, except for the error analysis, which is still under investigation. References [1] Harvey, C. C.: Spatial Gradients and the Volumetric Tensor, in: Analysis Methods for Multi-Spacecraft Data, G. Paschmann and P. Daly (eds.), pp. 
307-322, ISSI SR-001, 1998. [2] Chanteur, G.: Spatial Interpolation for Four Spacecraft: Theory, in: Analysis Methods for Multi-Spacecraft Data, G. Paschmann and P. Daly (eds.), pp. 371-393, ISSI SR-001, 1998. [3] Chanteur, G.: Accuracy of field gradient estimations by Cluster: Explanation of its dependency upon elongation and planarity of the tetrahedron, pp. 265-268, ESA SP-449, 2000. [4] Vogt, J., Paschmann, G., and Chanteur, G.: Reciprocal Vectors, pp. 33-46, ISSI SR-008, 2008.
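
    The tetrahedron-based estimators referenced above are compact enough to sketch. The snippet below (spacecraft positions and the sampled field are invented for illustration) builds the four reciprocal vectors k_a = (r_bc × r_bd) / (r_ba · (r_bc × r_bd)) and applies the standard gradient estimator ∇f ≈ Σ_a k_a f_a, which is exact for a linear field:

```python
import numpy as np

# Hypothetical tetrahedron of four spacecraft positions (one per row).
r = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def reciprocal_vectors(r):
    """Reciprocal vectors k_a of a non-coplanar tetrahedron:
    k_a = (r_bc x r_bd) / (r_ba . (r_bc x r_bd)), {b, c, d} the other three craft."""
    k = np.empty_like(r)
    for a in range(4):
        b, c, d = [i for i in range(4) if i != a]
        cross = np.cross(r[c] - r[b], r[d] - r[b])
        k[a] = cross / np.dot(r[a] - r[b], cross)
    return k

k = reciprocal_vectors(r)

# Gradient estimator built from the reciprocal vectors: exact for a linear field.
grad_true = np.array([2.0, -1.0, 0.5])
f = r @ grad_true + 3.0                  # linear scalar field sampled at the 4 craft
grad_est = (k * f[:, None]).sum(axis=0)  # recovers grad_true
```

    The reciprocal vectors satisfy Σ_a k_a = 0 and Σ_a k_a r_aᵀ = I, which is why the estimator reproduces a linear gradient exactly.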

  12. Killing approximation for vacuum and thermal stress-energy tensor in static space-times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, V.P.; Zel'nikov, A.I.

    1987-05-15

    The problem of the vacuum polarization of conformal massless fields in static space-times is considered. A tensor T_μν constructed from the curvature, the Killing vector, and their covariant derivatives is proposed which can be used to approximate the average value of the stress-energy tensor ⟨T_μν⟩^ren in such spaces. It is shown that if (i) its trace T_ε^ε coincides with the trace anomaly ⟨T_ε^ε⟩^ren, (ii) it satisfies the conservation law T^{με}_{;ε} = 0, and (iii) it has the correct behavior under scale transformations, then it is uniquely defined up to a few arbitrary constants. These constants must be chosen to satisfy the boundary conditions. In the case of a static black hole in a vacuum these conditions single out the unique tensor T_μν which provides a good approximation for ⟨T_μν⟩^ren in the Hartle-Hawking vacuum. The relation between this approach and the Page-Brown-Ottewill approach is discussed.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš; McDonald, Patrick, E-mail: useljak@berkeley.edu, E-mail: pvmcdonald@lbl.gov

    We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have a well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of the Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and the scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to the μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter, highlighting the issue of scale dependent bias of velocity moments correlators.

  14. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
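
    The dynamic time warping step at the heart of this classifier can be sketched in a few lines. This is the textbook scalar-sequence form, not the authors' implementation:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    filled in by the standard O(n*m) dynamic program."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # best of: insertion, deletion, match of the two prefixes
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    The SOFM and HMM stages would then operate on the aligned vectors; DTW here only supplies the elastic distance between a dynamic observation sequence and a template.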

  15. Computational model of a vector-mediated epidemic

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana Gomes; Dickman, Ronald

    2015-05-01

    We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
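
    A minimal stochastic version of such a two-species lattice model can be sketched as follows. All rates and sizes are hypothetical, and this is a toy stand-in for the authors' model rather than a reproduction of it:

```python
import random

random.seed(1)
L = 20              # lattice side; one sedentary host per site
STEPS = 200
P_INF = 0.5         # transmission probability per same-site contact (hypothetical)
P_REC = 0.05        # host recovery probability per step (hypothetical)

hosts = [[False] * L for _ in range(L)]          # infection state of sedentary hosts
vectors = [{"x": random.randrange(L), "y": random.randrange(L),
            "inf": i < 5} for i in range(60)]    # diffusing vectors, 5 start infected

for _ in range(STEPS):
    for v in vectors:
        # vectors diffuse: random nearest-neighbour hop, periodic boundaries
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        v["x"], v["y"] = (v["x"] + dx) % L, (v["y"] + dy) % L
        # cross-species transmission: host <-> vector at the same site
        if v["inf"] and not hosts[v["x"]][v["y"]] and random.random() < P_INF:
            hosts[v["x"]][v["y"]] = True
        elif hosts[v["x"]][v["y"]] and not v["inf"] and random.random() < P_INF:
            v["inf"] = True
    for x in range(L):
        for y in range(L):
            if hosts[x][y] and random.random() < P_REC:
                hosts[x][y] = False

infected_hosts = sum(map(sum, hosts))
```

    Sweeping P_INF or the vector density and recording whether the infection dies out reproduces, qualitatively, the absorbing-to-active phase transition described in the abstract.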

  16. Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism

    DTIC Science & Technology

    2013-11-12

    Frigo, Nicholas J.; Urick, Vincent J.; Bucholtz, Frank. Naval Research Laboratory, Code 5650, 4555 Overlook Avenue, SW; Photonics Technology Branch, Optical Sciences Division; Annapolis, Maryland. (Only report-documentation-page fragments, including distribution statements and the phrase "coupled mode", are available for this record.)

  17. On the n-symplectic structure of faithful irreducible representations

    NASA Astrophysics Data System (ADS)

    Norris, L. K.

    2017-04-01

    Each faithful irreducible representation of an N-dimensional vector space V1 on an n-dimensional vector space V2 is shown to define a unique irreducible n-symplectic structure on the product manifold V1×V2 . The basic details of the associated Poisson algebra are developed for the special case N = n2, and 2n-dimensional symplectic submanifolds are shown to exist.

  18. A phenomenological calculus of Wiener description space.

    PubMed

    Richardson, I W; Louie, A H

    2007-10-01

    The phenomenological calculus is a categorical example of Robert Rosen's modeling relation. This paper is an alligation of the phenomenological calculus and generalized harmonic analysis, another categorical example. Our epistemological exploration continues into the realm of Wiener description space, in which constitutive parameters are extended from vectors to vector-valued functions of a real variable. Inherent in the phenomenology are fundamental representations of time and nearness to equilibrium.

  19. GNSS Space-Time Interference Mitigation and Attitude Determination in the Presence of Interference Signals

    PubMed Central

    Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard

    2015-01-01

    The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios. PMID:26016909
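
    The first stage described above, projecting the received snapshots onto the orthogonal complement of an estimated interference subspace, can be sketched with an eigendecomposition of the sample covariance. The array geometry, interferer, and powers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 1000                      # 6-element array, N snapshots

# Hypothetical scene: one strong interferer plus weak noise-like signals.
a_int = np.exp(1j * np.pi * np.arange(M) * 0.3)     # interferer steering vector
x = np.outer(a_int, 10.0 * rng.standard_normal(N))  # strong wideband interferer
x += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Estimate the interference subspace from the sample covariance, then
# project all snapshots onto its orthogonal complement.
R = x @ x.conj().T / N
eigval, eigvec = np.linalg.eigh(R)          # eigenvalues in ascending order
U = eigvec[:, -1:]                          # dominant eigenvector spans interference
P = np.eye(M) - U @ U.conj().T              # orthogonal-complement projector
x_clean = P @ x

ratio = np.linalg.norm(x_clean) / np.linalg.norm(x)  # residual power fraction
```

    In the paper's receiver, acquisition and tracking then run on the projected data, and the estimated steering vectors are used in the later distortionless STP stage; the sketch stops at the projection itself.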

  20. GNSS space-time interference mitigation and attitude determination in the presence of interference signals.

    PubMed

    Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard

    2015-05-26

    The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios.

  1. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps in taking pre-emptive steps to counter an adversary's actions. The interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need to identify and offset these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require filtering before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of arrays of attributes extracted from semantically annotated messages generated by the sensors. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fitted by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. We also present a new visual analytic tool for testing and evaluating the group activities detected under this scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
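
    The GMM-with-EM component can be sketched in plain NumPy for a one-dimensional feature; the two "activity regimes" generating the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical 1-D event-feature samples drawn from two activity regimes.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 200)])

# Two-component Gaussian mixture fitted by expectation-maximization.
mu, sig, w = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each sample
    p = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = p / p.sum(1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = r.sum(0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / nk)
```

    After convergence the component densities can serve as the Bayesian event-probability model the abstract describes; real feature vectors would simply make mu and sig multivariate.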

  2. A unified approach to χ² discriminators for searches of gravitational waves from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Dhurandhar, Sanjeev; Gupta, Anuradha; Gadre, Bhooshan; Bose, Sukanta

    2017-11-01

    We describe a general mathematical framework for χ² discriminators in the context of the compact binary coalescence (CBC) search. We show that with any χ² is associated a vector bundle over the signal manifold, that is, the manifold traced out by the signal waveforms in the function space of data segments. The χ² is then defined as the square of the L² norm of the data vector projected onto a finite-dimensional subspace (the fibre) of the Hilbert space of data trains and orthogonal to the signal waveform. Any such fibre leads to a χ² discriminator, and the full vector bundle comprising the subspaces and the base manifold constitutes the χ² discriminator. We show that the χ² discriminators used so far in the CBC searches correspond to different fibre structures constituting different vector bundles on the same base manifold, namely, the parameter space. Several benefits accrue from this general formulation. Most importantly, it shows that there is a plethora of χ²'s available, and it further gives useful insights into the vetoing procedure. It indicates procedures for formulating new χ²'s that could be more effective in discriminating against commonly occurring glitches in the data. It also shows that no χ² with a reasonable number of degrees of freedom is foolproof. It could also shed light on why the traditional χ² works so well. We show how to construct a generic χ² given an arbitrary set of vectors in the function space of data segments. These vectors could be chosen such that glitches have maximum projection on them. Further, for glitches that can be modeled, we are able to quantify the efficiency of a given χ² discriminator by a probability. Second, we propose a family of ambiguity χ² discriminators as an alternative to the traditional one [B. Allen, Phys. Rev. D 71, 062001 (2005), 10.1103/PhysRevD.71.062001; B. Allen et al., Phys. Rev. D 85, 122006 (2012), 10.1103/PhysRevD.85.122006]. Any such ambiguity χ² makes use of the filtered output of the template bank, thus adding negligible cost to the overall search. It is so termed because it makes significant use of the ambiguity function. We first describe the formulation with the help of the Newtonian waveform, apply the ambiguity χ² to the spinless TaylorF2 waveforms, and test it on simulated data. We show that the ambiguity χ² gives an essentially clean separation between glitches and signals. We indicate how the ambiguity χ² can be generalized to detector networks for coherent observations. The effects of mismatch between signal and templates on a χ² discriminator are also investigated using general arguments and the geometrical framework.
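
    The construction of a χ² from an arbitrary set of vectors orthogonal to the signal waveform can be illustrated numerically. The template and the "glitch-shaped" fibre vectors below are arbitrary choices for illustration, not the χ²'s used in actual searches:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
s = np.sin(2 * np.pi * 8 * np.arange(n) / n)
s /= np.linalg.norm(s)                      # unit-norm template ("signal waveform")

# A small fibre: a few basis vectors chosen to resemble glitch shapes
# (hypothetical choice), orthonormalized against the template.
raw = np.stack([np.exp(-0.5 * ((np.arange(n) - c) / 8.0) ** 2)
                for c in (64, 128, 192)])
fibre = []
for v in raw:
    v = v - (v @ s) * s                     # remove the signal direction
    for u in fibre:
        v = v - (v @ u) * u                 # Gram-Schmidt against the fibre
    fibre.append(v / np.linalg.norm(v))
fibre = np.stack(fibre)

def chi2(d):
    """Squared L2 norm of the data projected onto the fibre."""
    return float(np.sum((fibre @ d) ** 2))

signal_like = 5.0 * s + 0.1 * rng.standard_normal(n)
glitch_like = 5.0 * raw[1] + 0.1 * rng.standard_normal(n)
```

    Because the fibre is orthogonal to the template, a loud signal contributes almost nothing to chi2 while a glitch with large projection on the fibre is heavily penalized, which is exactly the discrimination mechanism the abstract formalizes.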

  3. The infinitesimal operator for the semigroup of the Frobenius-Perron operator from image sequence data: vector fields and transport barriers from movies.

    PubMed

    Santitissadeekorn, N; Bollt, E M

    2007-06-01

    In this paper, we present an approach to approximate the Frobenius-Perron transfer operator from a sequence of time-ordered images, that is, a movie dataset. Unlike time-series data, successive images do not provide direct access to the trajectory of a point in phase space (more precisely, of a pixel in the image plane). Therefore, we reconstruct the velocity field from the image sequence based on the infinitesimal generator of the Frobenius-Perron operator. Moreover, we relate this problem to the well-known optical flow problem from the computer vision community, and we validate the continuity equation derived from the infinitesimal operator as a constraint equation for the optical flow problem. Once the vector field, and from it a discrete transfer operator, are found, we present a graph modularity method as a tool to discover basin structure in the phase space. Together with the tool for reconstructing the velocity field, this graph-based partition method provides a way to study transport behavior and other ergodic properties of measurable dynamical systems captured only through image sequences.

  4. Fractal planetary rings: Energy inequalities and random field model

    NASA Astrophysics Data System (ADS)

    Malyarenko, Anatoliy; Ostoja-Starzewski, Martin

    2017-12-01

    This study is motivated by a recent observation, based on photographs from the Cassini mission, that Saturn's rings have a fractal structure in the radial direction. Accordingly, two questions are considered: (1) What Newtonian mechanics argument in support of such a fractal structure of planetary rings is possible? (2) What kinematics model of such fractal rings can be formulated? Both challenges are based on taking the planetary rings' spatial structure as being statistically stationary in time and statistically isotropic in space, but statistically nonstationary in space. An answer to the first challenge is given through an energy analysis of circular rings having a self-generated, noninteger-dimensional mass distribution [V. E. Tarasov, Int. J. Mod. Phys. B 19, 4103 (2005)]. The second issue is approached by taking the random field of the angular velocity vector of a rotating particle of the ring as a random section of a special vector bundle. Using the theory of group representations, we prove that such a field is completely determined by a sequence of continuous positive-definite matrix-valued functions defined on the Cartesian square F² of the radial cross-section F of the rings, where F is a fat fractal.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. The scalar field topology part covers finding topological features and scalar field visualization, and the vector field topology part covers finding topological features and vector field visualization.

  6. Vectors in Use in a 3D Juggling Game Simulation

    ERIC Educational Resources Information Center

    Kynigos, Chronis; Latsi, Maria

    2006-01-01

    The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13 year-old students in relation to vectors as…

  7. 4 × 20 Gbit/s mode division multiplexing over free space using vector modes and a q-plate mode (de)multiplexer

    NASA Astrophysics Data System (ADS)

    Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.

    2015-05-01

    Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer, four vector modes, each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550 nm), comprising an aggregate 80 Gbit/s, were transmitted ~1 m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties < 3.41 dB.

  8. Current algebra, statistical mechanics and quantum models

    NASA Astrophysics Data System (ADS)

    Vilela Mendes, R.

    2017-11-01

    Results obtained in the past for free boson systems at zero and nonzero temperatures are revisited to clarify the physical meaning of current algebra reducible functionals which are associated to systems with density fluctuations, leading to observable effects on phase transitions. To use current algebra as a tool for the formulation of quantum statistical mechanics amounts to the construction of unitary representations of diffeomorphism groups. Two mathematical equivalent procedures exist for this purpose. One searches for quasi-invariant measures on configuration spaces, the other for a cyclic vector in Hilbert space. Here, one argues that the second approach is closer to the physical intuition when modelling complex systems. An example of application of the current algebra methodology to the pairing phenomenon in two-dimensional fermion systems is discussed.

  9. Real-Space Analysis of Scanning Tunneling Microscopy Topography Datasets Using Sparse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Miyama, Masamichi J.; Hukushima, Koji

    2018-04-01

    A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contain numerous peaks originating from the electron density of surface atoms and/or impurities. The method, based on the relevance vector machine with L1 regularization and k-means clustering, enables separation of the peaks and peak center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison with the conventional least-squares method. An application of the proposed method to experimental data of a metallic oxide thin-film clearly indicates the existence of defects and corresponding local lattice distortions.
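
    The peak-separation-and-centering idea can be illustrated on synthetic data. The sketch below substitutes a simple thresholding step for the relevance-vector/L1 machinery but keeps the k-means clustering and the sub-grid centroid positioning; all positions and noise levels are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
yy, xx = np.mgrid[0:n, 0:n]

# Synthetic STM-like topograph: two Gaussian peaks at made-up sub-grid positions.
centers = [(20.3, 14.7), (45.8, 50.2)]
z = sum(np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
        for cy, cx in centers)
z += 0.02 * rng.standard_normal((n, n))

# Keep only significant pixels (a crude stand-in for the sparsifying step),
# cluster them with plain k-means, and take intensity-weighted centroids,
# which locate the peak centres with sub-grid accuracy.
mask = z > 0.3
pts = np.argwhere(mask).astype(float)
w = z[mask]

cent = np.stack([pts[0], pts[-1]])        # deterministic initial centres
for _ in range(20):
    d = ((pts[:, None, :] - cent[None]) ** 2).sum(-1)
    lab = d.argmin(1)
    cent = np.stack([np.average(pts[lab == j], axis=0, weights=w[lab == j])
                     for j in range(len(cent))])
```

    The intensity-weighted centroids land within a fraction of a pixel of the true (non-integer) peak positions, which is the "accuracy beyond the measurement grid" the abstract refers to.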

  10. Direct time integration of Maxwell's equations in linear dispersive media with absorption for scattering and propagation of femtosecond electromagnetic pulses

    NASA Technical Reports Server (NTRS)

    Joseph, Rose M.; Hagness, Susan C.; Taflove, Allen

    1991-01-01

    The initial results for femtosecond pulse propagation and scattering interactions for a Lorentz medium obtained by a direct time integration of Maxwell's equations are reported. The computational approach provides reflection coefficients accurate to better than 6 parts in 10,000 over the frequency range of dc to 3 x 10 to the 16th Hz for a single 0.2-fs Gaussian pulse incident upon a Lorentz-medium half-space. New results for Sommerfeld and Brillouin precursors are shown and compared with previous analyses. The present approach is robust and permits 2D and 3D electromagnetic pulse propagation directly from the full-vector Maxwell's equations.

  11. Dengue Fever Occurrence and Vector Detection by Larval Survey, Ovitrap and MosquiTRAP: A Space-Time Clusters Analysis

    PubMed Central

    de Melo, Diogo Portella Ornelas; Scherrer, Luciano Rios; Eiras, Álvaro Eduardo

    2012-01-01

    The use of vector surveillance tools for preventing dengue disease requires fine assessment of risk in order to improve vector control activities. Nevertheless, the thresholds between vector detection and dengue fever occurrence are currently not well established. In Belo Horizonte (Minas Gerais, Brazil), dengue has been endemic for several years. From January 2007 to June 2008, the dengue vector Aedes (Stegomyia) aegypti was monitored by ovitrap, the sticky trap MosquiTRAP™ and larval surveys in a study area in Belo Horizonte. Using a space-time scan for cluster detection implemented in the SaTScan software, the vector presence recorded by the different monitoring methods was evaluated. Clusters of vectors and of dengue fever were detected. It was verified that the ovitrap and MosquiTRAP vector detection methods predicted dengue occurrence better than the larval survey, both spatially and temporally. MosquiTRAP and ovitrap presented similar numbers of space-time intersections with dengue fever clusters. Nevertheless, ovitrap clusters lasted longer than MosquiTRAP ones, signaling the dengue risk areas less accurately, since the detection of vector clusters during most of the study period was not necessarily correlated with dengue fever occurrence. It was verified that ovitrap clusters occurred more than 200 days (values ranged from 97.0±35.35 to 283.0±168.4 days) before dengue fever clusters, whereas MosquiTRAP clusters preceded dengue fever clusters by approximately 80 days (values ranged from 65.5±58.7 to 94.0±14.3 days), the latter proving more temporally precise. Thus, in the present cluster analysis study, MosquiTRAP presented superior results for signaling dengue transmission risk both geographically and temporally. Since early detection is crucial for planning and deploying effective prevention, MosquiTRAP proved to be a reliable tool, and this method provides groundwork for the development of even more precise tools. PMID:22848729

  12. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial gene set and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony optimization (ACO) method is used to find the set of features that improves the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the most effective yet smallest feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine and naïve Bayes. The proposed approach is evaluated on four microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
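
    The primary Fisher-criterion filter is straightforward to sketch. The data below are synthetic, with the first three "genes" made discriminative by construction:

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_genes = 40, 500
X = rng.standard_normal((n_samples, n_genes))    # synthetic expression matrix
y = np.repeat([0, 1], n_samples // 2)            # two-class labels
X[y == 1, :3] += 2.0                             # make the first 3 genes informative

def fisher_scores(X, y):
    """Per-gene Fisher criterion (mu0 - mu1)^2 / (var0 + var1)."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

scores = fisher_scores(X, y)
top = np.argsort(scores)[::-1][:10]   # primary filter: keep only top-ranked genes
```

    Only the surviving genes would then enter the (far more expensive) CLA/ACO wrapper search, which is the point of the two-stage design.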

  13. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    PubMed

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L²-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and the sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
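
    Working intrinsically on a Riemannian manifold typically means replacing vector-space operations with exponential and logarithm maps; for the unit sphere, one of the tractable manifolds, these have closed forms. A minimal sketch (an illustration of the intrinsic toolkit, not the paper's algorithm):

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q,
    with length equal to the geodesic distance."""
    w = q - np.dot(p, q) * p                 # component of q orthogonal to p
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(p)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * w / nw

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow the geodesic from p along tangent v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * v / nv
```

    Intrinsic sparse coding then measures reconstruction error with geodesic quantities built from these maps instead of the Euclidean L² residual.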

  14. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L²-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and the sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  15. Adaptive Hybrid Picture Coding. Volume 2.

    DTIC Science & Technology

    1985-02-01

    (Only garbled table-of-contents and text fragments are available for this record: V.a Measurement Vector; V.b Size Variable / Centroid Vector; V.c Shape Vector; Details of the Program for the Adaptive Line of Sight Method; B, Details of the Feature Vector Formation Program. From the text: shape recognition is analogous to recognition of curves in space; therefore, well known concepts and theorems from differential geometry can be …)

  16. Vehicle Based Vector Sensor

    DTIC Science & Technology

    2015-09-28

    (Only duplicated claim fragments and part of the description are available for this record.) Claim fragment: "… buoyant underwater vehicle with an interior space in which a length of said underwater vehicle is equal to one tenth of the acoustic wavelength …". Description fragment: "… unmanned underwater vehicle that can function as an acoustic vector sensor. (2) Description of the Prior Art: It is known that a propagating …"

  17. Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.

    PubMed

    Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D

    2018-01-01

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term-frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. We observed that the deep learning models consistently outperformed the conventional term-frequency approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, though trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
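
    The conventional term-frequency baseline can be sketched with a tiny invented corpus. The reports, vocabulary, and nearest-centroid classifier below are illustrative stand-ins, not the study's actual pipeline or data:

```python
from collections import Counter

# Tiny hypothetical corpus standing in for pathology report text.
train = [("mass in left upper lobe of lung", "lung"),
         ("lung nodule with pleural involvement", "lung"),
         ("invasive ductal carcinoma of the breast", "breast"),
         ("breast tissue with lobular carcinoma", "breast")]

vocab = sorted({w for text, _ in train for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def tf_vector(text):
    """Raw term-frequency vector over the training vocabulary."""
    counts = Counter(w for w in text.split() if w in index)
    return [counts[w] for w in vocab]

def centroid(label):
    """Mean term-frequency vector of all training reports with this label."""
    vecs = [tf_vector(t) for t, y in train if y == label]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

centroids = {y: centroid(y) for y in {"lung", "breast"}}

def classify(text):
    """Assign the label whose centroid has the largest dot product with the report."""
    v = tf_vector(text)
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(centroids, key=lambda y: dot(v, centroids[y]))
```

    A CNN replaces the fixed bag-of-words representation with learned convolutional filters over word embeddings, which is where the reported F-score gains come from.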

  18. CR-Calculus and adaptive array theory applied to MIMO random vibration control tests

    NASA Astrophysics Data System (ADS)

    Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.

    2016-09-01

Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment in a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the required drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-Calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing a step-by-step update of the Jacobian matrix. The theoretical bases behind the work are followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
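    The core of CR-Calculus is that a real-valued cost of a complex variable is minimized by stepping along the derivative with respect to the conjugate variable (the Wirtinger gradient). A minimal single-channel sketch, with an assumed scalar plant response h and target t (illustrative, not the paper's MIMO controller):

```python
# Minimal CR-calculus (Wirtinger) gradient descent: adjust a complex drive d
# so that a scalar "plant" h reproduces a complex target response t.
# Cost J(d) = |h*d - t|^2; steepest descent uses dJ/d(conj(d)) = conj(h)*(h*d - t).
h = 0.8 - 0.3j          # assumed plant frequency response (illustrative)
t = 1.0 + 0.5j          # target response at the control point
d = 0.0 + 0.0j          # drive signal, updated iteratively
mu = 0.5                # step size
for _ in range(200):
    e = h * d - t                   # complex error
    d -= mu * h.conjugate() * e     # Wirtinger-gradient update
# at convergence the drive satisfies h*d ≈ t
```

    In the MIMO case h becomes a matrix of frequency response functions and the same conjugate-gradient structure applies element-wise.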

  19. Transmission dynamics: critical questions and challenges

    PubMed Central

    2017-01-01

This article reviews the dynamics of disease transmission in one-host–one-parasite systems. Transmission is the result of interacting host and pathogen processes, encapsulated with the environment in a ‘transmission triangle’. Multiple transmission modes and their epidemiological consequences are often not understood because the direct measurement of transmission is difficult. However, its different components can be analysed using nonlinear transmission functions, contact matrices and networks. A particular challenge is to develop such functions for spatially extended systems. This is illustrated for vector transmission, where a ‘perception kernel’ approach is developed that incorporates vector behaviour in response to host spacing. A major challenge is understanding the relative merits of the large number of approaches to quantifying transmission. The evolution of transmission mode itself has been a rather neglected topic, but is important in the context of understanding disease emergence and genetic variation in pathogens. Disease impacts many biological processes such as community stability, the evolution of sex and speciation, yet the importance of different transmission modes in these processes is not understood. Broader approaches and ideas to disease transmission are important in the public health realm for combating newly emerging infections. This article is part of the themed issue ‘Opening the black box: re-examining the ecology and evolution of parasite transmission’. PMID:28289255

  20. A general theory of linear cosmological perturbations: stability conditions, the quasistatic limit and dynamics

    NASA Astrophysics Data System (ADS)

    Lagos, Macarena; Bellini, Emilio; Noller, Johannes; Ferreira, Pedro G.; Baker, Tessa

    2018-03-01

    We analyse cosmological perturbations around a homogeneous and isotropic background for scalar-tensor, vector-tensor and bimetric theories of gravity. Building on previous results, we propose a unified view of the effective parameters of all these theories. Based on this structure, we explore the viable space of parameters for each family of models by imposing the absence of ghosts and gradient instabilities. We then focus on the quasistatic regime and confirm that all these theories can be approximated by the phenomenological two-parameter model described by an effective Newton's constant and the gravitational slip. Within the quasistatic regime we pinpoint signatures which can distinguish between the broad classes of models (scalar-tensor, vector-tensor or bimetric). Finally, we present the equations of motion for our unified approach in such a way that they can be implemented in Einstein-Boltzmann solvers.

  1. Space station proximity operations windows: Human factors design guidelines

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1987-01-01

    Proximity operations refers to all activities outside the Space Station which take place within a 1-km radius. Since there will be a large number of different operations involving manned and unmanned vehicles, single- and multiperson crews, automated and manually controlled flight, a wide variety of cargo, and construction/repair activities, accurate and continuous human monitoring of these operations from a specially designed control station on Space Station will be required. Total situational awareness will be required. This paper presents numerous human factors design guidelines and related background information for control windows which will support proximity operations. Separate sections deal with natural and artificial illumination geometry; all basic rendezvous vector approaches; window field-of-view requirements; window size; shape and placement criteria; window optical characteristics as they relate to human perception; maintenance and protection issues; and a comprehensive review of windows installed on U.S. and U.S.S.R. manned vehicles.

  2. Four-dimensional gravity as an almost-Poisson system

    NASA Astrophysics Data System (ADS)

    Ita, Eyo Eyo

    2015-04-01

    In this paper, we examine the phase space structure of a noncanonical formulation of four-dimensional gravity referred to as the Instanton representation of Plebanski gravity (IRPG). The typical Hamiltonian (symplectic) approach leads to an obstruction to the definition of a symplectic structure on the full phase space of the IRPG. We circumvent this obstruction, using the Lagrange equations of motion, to find the appropriate generalization of the Poisson bracket. It is shown that the IRPG does not support a Poisson bracket except on the vector constraint surface. Yet there exists a fundamental bilinear operation on its phase space which produces the correct equations of motion and induces the correct transformation properties of the basic fields. This bilinear operation is known as the almost-Poisson bracket, which fails to satisfy the Jacobi identity and in this case also the condition of antisymmetry. We place these results into the overall context of nonsymplectic systems.

  3. Novel strategy to implement active-space coupled-cluster methods

    NASA Astrophysics Data System (ADS)

    Rolik, Zoltán; Kállay, Mihály

    2018-03-01

    A new approach is presented for the efficient implementation of coupled-cluster (CC) methods including higher excitations based on a molecular orbital space partitioned into active and inactive orbitals. In the new framework, the string representation of amplitudes and intermediates is used as long as it is beneficial, but the contractions are evaluated as matrix products. Using a new diagrammatic technique, the CC equations are represented in a compact form due to the string notations we introduced. As an application of these ideas, a new automated implementation of the single-reference-based multi-reference CC equations is presented for arbitrary excitation levels. The new program can be considered as an improvement over the previous implementations in many respects; e.g., diagram contributions are evaluated by efficient vectorized subroutines. Timings for test calculations for various complete active-space problems are presented. As an application of the new code, the weak interactions in the Be dimer were studied.

  4. Small massless excitations against a nontrivial background

    NASA Astrophysics Data System (ADS)

    Khariton, N. G.; Svetovoy, V. B.

    1994-03-01

We propose a systematic approach for finding bosonic zero modes of nontrivial classical solutions in a gauge theory. The method allows us to find all the modes connected with the broken space-time and gauge symmetries. The ground state is supposed to be dependent on some space coordinates y_α and independent of the rest of the coordinates x_i. The main problem which is solved is how to construct the zero modes corresponding to the broken x_i-y_α rotations in vacuum and which boundary conditions specify them. It is found that the rotational modes are typically singular at the origin or at infinity, but their energy remains finite. They behave as massless vector fields in x space. We analyze local and global symmetries affecting the zero modes. An algorithm for constructing the zero mode excitations is formulated. The main results are illustrated in the Abelian Higgs model with the string background.

  5. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.

  6. RNAi in Arthropods: Insight into the Machinery and Applications for Understanding the Pathogen-Vector Interface

    PubMed Central

    Barnard, Annette-Christi; Nijhof, Ard M.; Fick, Wilma; Stutzer, Christian; Maritz-Olivier, Christine

    2012-01-01

    The availability of genome sequencing data in combination with knowledge of expressed genes via transcriptome and proteome data has greatly advanced our understanding of arthropod vectors of disease. Not only have we gained insight into vector biology, but also into their respective vector-pathogen interactions. By combining the strengths of postgenomic databases and reverse genetic approaches such as RNAi, the numbers of available drug and vaccine targets, as well as number of transgenes for subsequent transgenic or paratransgenic approaches, have expanded. These are now paving the way for in-field control strategies of vectors and their pathogens. Basic scientific questions, such as understanding the basic components of the vector RNAi machinery, is vital, as this allows for the transfer of basic RNAi machinery components into RNAi-deficient vectors, thereby expanding the genetic toolbox of these RNAi-deficient vectors and pathogens. In this review, we focus on the current knowledge of arthropod vector RNAi machinery and the impact of RNAi on understanding vector biology and vector-pathogen interactions for which vector genomic data is available on VectorBase. PMID:24705082

  7. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  8. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
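    The first stage of the method, Takens time-delay embedding, can be sketched directly; the second stage (independent component analysis to obtain linearly independent phase variables) is omitted here. The series, embedding dimension and delay are placeholders:

```python
def delay_embed(series, dim, tau):
    """Takens time-delay embedding: map a scalar series to vectors
    [x(t), x(t - tau), ..., x(t - (dim - 1) * tau)].
    Sketch of the embedding stage only; the paper follows this with
    independent component analysis to obtain linearly independent
    phase variables."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

x = [i * 0.1 for i in range(20)]        # placeholder scalar observation series
vectors = delay_embed(x, dim=3, tau=2)  # reconstructed state vectors
```

    Each embedded vector is a candidate point on the reconstructed attractor; dim and tau are normally chosen from false-nearest-neighbour and mutual-information criteria.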

  9. Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendices; Index.

  10. Student Solution Manual for Foundation Mathematics for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-03-01

    1. Arithmetic and geometry; 2. Preliminary algebra; 3. Differential calculus; 4. Integral calculus; 5. Complex numbers and hyperbolic functions; 6. Series and limits; 7. Partial differentiation; 8. Multiple integrals; 9. Vector algebra; 10. Matrices and vector spaces; 11. Vector calculus; 12. Line, surface and volume integrals; 13. Laplace transforms; 14. Ordinary differential equations; 15. Elementary probability; Appendix.

  11. Lorentz symmetric n-particle systems without ``multiple times''

    NASA Astrophysics Data System (ADS)

    Smith, Felix

    2013-05-01

The need for multiple times in relativistic n-particle dynamics is a consequence of Minkowski's postulated symmetry between space and time coordinates in a space-time s = [x_1, ..., x_4] = [x, y, z, ict], Eq. (1). Poincaré doubted the need for this space-time symmetry, believing Lorentz covariance could also prevail in some geometries with a three-dimensional position space and a quite different time coordinate. The Hubble expansion observed later justifies a specific geometry of this kind, a negatively curved position 3-space expanding with time at the Hubble rate l_H(t) = l_H,0 + cΔt (F. T. Smith, Ann. Fond. L. de Broglie, 30, 179 (2005) and 35, 395 (2010)). Its position 4-vector is not s but q = [x_1, ..., x_4] = [x, y, z, i l_H(t)], and shows no 4-space symmetry. What is observed is always a difference 4-vector Δq = [Δx, Δy, Δz, icΔt], and this displays the structure of Eq. (1) perfectly. Thus we find the standard 4-vector of special relativity in a geometry that does not require a Minkowski space-time at all, but a quite different geometry with an expanding 3-space symmetry and an independent time. The same Lorentz symmetry with but a single time extends to 2- and n-body systems.

  12. State space model approach for forecasting the use of electrical energy (a case study on: PT. PLN (Persero) district of Kroya)

    NASA Astrophysics Data System (ADS)

    Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik

    2018-05-01

Time series data are a sequence of observations taken or measured at regular time intervals, and time series analysis accounts for the effect of time in the data. The purpose of time series analysis is to characterize the patterns in a data set and to predict future values based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal autoregressive (AR) order selection, determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results show that modeling electric energy consumption with a state space model of order 4 yields a Mean Absolute Percentage Error (MAPE) of 3.655%, placing the model in the very good forecasting category.
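    The two ingredients of the evaluation, an autoregressive fit and the MAPE score, can be sketched with a simple AR(1) least-squares fit (the paper itself selects an order-4 state space model; the usage figures below are hypothetical):

```python
# Illustrative sketch, not the paper's order-4 state space model: fit an AR(1)
# by least squares and score one-step-ahead forecasts with MAPE.
def fit_ar1(y):
    x, z = y[:-1], y[1:]
    # least-squares slope of z on x (no intercept): phi = sum(x*z) / sum(x*x)
    return sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

usage = [100, 104, 108, 111, 115, 118, 122, 125]   # hypothetical monthly kWh
phi = fit_ar1(usage)
forecasts = [phi * y for y in usage[:-1]]           # one-step-ahead predictions
err = mape(usage[1:], forecasts)
```

    A MAPE below 10% is conventionally read as highly accurate forecasting, which is the category the paper reports for its model.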

  13. Physics in space-time with scale-dependent metrics

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.

    2013-10-01

We construct a three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. The local derivatives based on scale-dependent metrics are defined and a differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales ≪ ℓ_0 to D_H = 4 in the infrared limit ≫ ℓ_0, where ℓ_0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.

  14. A link between torse-forming vector fields and rotational hypersurfaces

    NASA Astrophysics Data System (ADS)

    Chen, Bang-Yen; Verstraelen, Leopold

Torse-forming vector fields introduced by Yano [On torse forming direction in a Riemannian space, Proc. Imp. Acad. Tokyo 20 (1944) 340-346] are natural extensions of concurrent and concircular vector fields. Such vector fields have many nice applications to geometry and mathematical physics. In this paper, we establish a link between rotational hypersurfaces and torse-forming vector fields. More precisely, our main result states that, for a hypersurface M of 𝔼^{n+1} with n ≥ 3, the tangential component x^T of the position vector field of M is a proper torse-forming vector field on M if and only if M is contained in a rotational hypersurface whose axis of rotation contains the origin.

  15. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

A new methodology for discrimination is proposed, based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS with Fisher's approach to linear discrimination, or equivalently with canonical correlation analysis, is described; this motivates the preference for orthonormalized PLS over principal component analysis. The good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger movement periods versus non-movement periods based on the electroencephalogram.

  16. Vector autoregressive models: A Gini approach

    NASA Astrophysics Data System (ADS)

    Mussard, Stéphane; Ndiaye, Oumar Hamady

    2018-02-01

In this paper, it is proven that the usual VAR models may be formulated in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers; as a consequence, when data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. Inference about the estimators is made with the ℓ1 norm. Impulse response functions and Gini decompositions for prediction errors are also introduced. Finally, Granger causality tests are properly derived based on U-statistics.

  17. 1988 IEEE Aerospace Applications Conference, Park City, UT, Feb. 7-12, 1988, Digest

    NASA Astrophysics Data System (ADS)

    The conference presents papers on microwave applications, data and signal processing applications, related aerospace applications, and advanced microelectronic products for the aerospace industry. Topics include a high-performance antenna measurement system, microwave power beaming from earth to space, the digital enhancement of microwave component performance, and a GaAs vector processor based on parallel RISC microprocessors. Consideration is also given to unique techniques for reliable SBNR architectures, a linear analysis subsystem for CSSL-IV, and a structured singular value approach to missile autopilot analysis.

  18. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data.

    PubMed

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C; Keijsers, Loes; Hamaker, Ellen L

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.

  19. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    PubMed Central

    de Haan-Rietdijk, Silvia; Voelkle, Manuel C.; Keijsers, Loes; Hamaker, Ellen L.

    2017-01-01

    The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available. PMID:29104554
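    The tension between DT and CT models described above has a compact algebraic form: for a continuous-time first-order process with drift β < 0, the discrete-time autoregressive coefficient implied over an interval Δt is φ(Δt) = exp(βΔt), so unequal intervals imply unequal φ. A sketch with assumed, illustrative values:

```python
import math

# For a continuous-time first-order process with drift beta < 0, the implied
# discrete-time AR coefficient over an interval dt is phi(dt) = exp(beta * dt).
# With unequal ESM intervals, no single DT coefficient fits all gaps at once.
beta = -0.5                          # assumed CT drift (illustrative)
intervals = [1.0, 2.0, 8.0]          # e.g. hours; 8.0 mimics a nighttime gap
phis = [math.exp(beta * dt) for dt in intervals]
# phis shrink toward 0 as the gap grows, unlike the constant phi a DT model assumes
```

    This is why treating the nighttime gap as a regular interval, the crudest DT implementation studied in the paper, distorts the estimated autoregression.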

  20. Vibration-based damage detection in a concrete beam under temperature variations using AR models and state-space approaches

    NASA Astrophysics Data System (ADS)

    Clément, A.; Laurens, S.

    2011-07-01

The Structural Health Monitoring of civil structures subjected to ambient vibrations is very challenging. Indeed, the variations of environmental conditions and the difficulty of characterizing the excitation make damage detection a hard task. Auto-regressive (AR) model coefficients are often used as a damage-sensitive feature. The presented work proposes a comparison of the AR approach with a state-space feature formed by the Jacobian matrix of the dynamical process. Since the detection of damage can be formulated as a novelty detection problem, the Mahalanobis distance is applied to flag new points against an undamaged reference collection of feature vectors. Data from a concrete beam subjected to temperature variations and damaged by several static loadings are analyzed. It is observed that the damage-sensitive features are indeed sensitive to temperature variations. However, the use of the Mahalanobis distance makes the detection of cracking possible with both feature types. Early damage (before cracking) is only revealed by the AR coefficients, with good sensitivity.
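    The novelty-detection step can be sketched for two-component feature vectors (e.g. two AR coefficients per measurement); the reference values below are hypothetical, and the 2x2 covariance is inverted in closed form:

```python
# Sketch of Mahalanobis-distance novelty detection over 2-D feature vectors;
# the reference set stands in for features from the undamaged structure.
def mean_cov(data):
    n = len(data)
    mx = sum(v[0] for v in data) / n
    my = sum(v[1] for v in data) / n
    sxx = sum((v[0] - mx) ** 2 for v in data) / (n - 1)
    syy = sum((v[1] - my) ** 2 for v in data) / (n - 1)
    sxy = sum((v[0] - mx) * (v[1] - my) for v in data) / (n - 1)
    return (mx, my), (sxx, sxy, syy)

def mahalanobis_sq(v, mean, cov):
    """Squared Mahalanobis distance, inverting the 2x2 covariance directly."""
    (mx, my), (sxx, sxy, syy) = mean, cov
    det = sxx * syy - sxy * sxy
    dx, dy = v[0] - mx, v[1] - my
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

reference = [(1.0, 0.1), (1.1, 0.12), (0.9, 0.09), (1.05, 0.11), (0.95, 0.1)]
mean, cov = mean_cov(reference)
d_ok = mahalanobis_sq((1.0, 0.1), mean, cov)    # typical point: small distance
d_new = mahalanobis_sq((1.6, 0.3), mean, cov)   # outlier flagged as novelty
```

    Points whose distance exceeds a threshold learned from the reference collection are declared novel, i.e. potentially damaged states.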

  1. A new scheme for perturbative triples correction to (0,1) sector of Fock space multi-reference coupled cluster method: theory, implementation, and examples.

    PubMed

    Dutta, Achintya Kumar; Vaval, Nayana; Pal, Sourav

    2015-01-28

We propose a new, elegant strategy to implement a third-order triples correction, in the light of many-body perturbation theory, to the Fock space multi-reference coupled cluster method for the ionization problem. The computational scaling as well as the storage requirement are key concerns in any many-body calculation. Our proposed approach scales as N^6, does not require the storage of triples amplitudes, and gives superior agreement over all the previous attempts made. This approach is capable of calculating multiple roots in a single calculation, in contrast to the inclusion of perturbative triples in the equation-of-motion variant of coupled cluster theory, where each root needs to be computed in a state-specific way and requires both the left and right state vectors together. The performance of the newly implemented scheme is tested by applying it to methylene, the boron nitride (B2N) anion, nitrogen, water, carbon monoxide, acetylene, formaldehyde, and thymine monomer, a DNA base.

  2. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements in a simultaneous manner from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated with a first-order polynomial function of the spatial coordinates. The problem of accurately estimating the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach of the proposed method allows it to handle fringe patterns that may contain invalid regions.
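    The predict/update structure underlying the paper's estimator can be shown in its simplest linear form: a scalar Kalman filter tracking a constant state from noisy measurements (the paper itself uses an extended Kalman filter on a state vector of polynomial phase coefficients and amplitudes; the readings below are hypothetical):

```python
# Greatly simplified sketch: a scalar linear Kalman filter estimating a
# constant state from noisy measurements with noise variance r.
def kalman_constant(measurements, r, p0=1.0, x0=0.0):
    x, p = x0, p0                # state estimate and its variance
    for z in measurements:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # measurement update of the state
        p = (1 - k) * p          # variance shrinks as data accumulate
    return x, p

zs = [2.1, 1.9, 2.05, 2.0, 1.95]      # noisy readings of a true value near 2.0
est, var = kalman_constant(zs, r=0.01)
```

    In the extended variant, the measurement model (the exponential phase signal) is nonlinear in the state, so the gain is computed from its local linearization at each pixel.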

  3. Conformational and functional analysis of molecular dynamics trajectories by Self-Organising Maps

    PubMed Central

    2011-01-01

Background Molecular dynamics (MD) simulations are powerful tools to investigate the conformational dynamics of proteins, which is often a critical element of their function. Identification of functionally relevant conformations is generally done by clustering the large ensemble of structures that are generated. Recently, Self-Organising Maps (SOMs) were reported to perform more accurately and provide more consistent results than traditional clustering algorithms in various data mining problems. We present a novel strategy to analyse and compare conformational ensembles of protein domains using a two-level approach that combines SOMs and hierarchical clustering. Results The conformational dynamics of the α-spectrin SH3 protein domain and six single mutants were analysed by MD simulations. The Cα Cartesian coordinates of conformations sampled in the essential space were used as input data vectors for SOM training; then complete linkage clustering was performed on the SOM prototype vectors. A specific protocol to optimize a SOM for structural ensembles was proposed: the optimal SOM was selected by means of a Taguchi experimental design plan applied to different data sets, and the optimal sampling rate of the MD trajectory was selected. The proposed two-level approach was applied to single trajectories of the SH3 domain independently as well as to groups of them at the same time. The results demonstrated the potential of this approach in the analysis of large ensembles of molecular structures: the possibility of producing a topological mapping of the conformational space in a simple 2D visualisation, as well as of effectively highlighting differences in the conformational dynamics directly related to biological functions. Conclusions The use of a two-level approach combining SOMs and hierarchical clustering for conformational analysis of structural ensembles of proteins was proposed. It can easily be extended to other study cases and to conformational ensembles from other sources. PMID:21569575

  4. Vector Magnetograph Design

    NASA Technical Reports Server (NTRS)

    Chipman, Russell A.

    1996-01-01

    This report covers work performed during the period of November 1994 through March 1996 on the design of a Space-borne Solar Vector Magnetograph. This work has been performed as part of a design team under the supervision of Dr. Mona Hagyard and Dr. Alan Gary of the Space Science Laboratory. Many tasks were performed and this report documents the results from some of those tasks, each contained in the corresponding appendix. Appendices are organized in chronological order.

  5. The Absolute Vector Magnetometers on Board Swarm, Lessons Learned From Two Years in Space.

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Leger, J. M.; Vigneron, P.; Brocco, L.; Olsen, N.; Jager, T.; Bertrand, F.; Fratter, I.; Sirol, O.; Lalanne, X.

    2015-12-01

ESA's Swarm satellites carry 4He absolute magnetometers (ASM), designed by CEA-Léti and developed in partnership with CNES. These instruments are the first-ever space-borne magnetometers to use a common sensor to simultaneously deliver 1 Hz independent absolute scalar and vector readings of the magnetic field. They have provided the very high accuracy scalar field data nominally required by the mission (for both science and calibration purposes, since each satellite also carries a low-noise high-frequency fluxgate magnetometer designed by DTU), but also very useful experimental absolute vector data. In this presentation, we will report on the status of the instruments, as well as on the various tests and investigations carried out using these experimental data since launch in November 2013. In particular, we will illustrate the advantages of flying ASM instruments on space-borne magnetic missions for nominal data quality checks, geomagnetic field modeling and science objectives.

  6. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.

  7. Laplace-Runge-Lenz vector in quantum mechanics in noncommutative space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gáliková, Veronika; Kováčik, Samuel; Prešnajder, Peter

    2013-12-15

    The main point of this paper is to examine a “hidden” dynamical symmetry connected with the conservation of the Laplace-Runge-Lenz vector (LRL) in the hydrogen atom problem solved by means of non-commutative quantum mechanics (NCQM). The basic features of NCQM will be introduced to the reader, the key one being the fact that the notion of a point, or a zero distance in the considered configuration space, is abandoned and replaced with a “fuzzy” structure in such a way that the rotational invariance is preserved. The main facts about the conservation of the LRL vector in both classical and quantum theory will be reviewed. Finally, we will search for an analogy in the NCQM, provide our results and their comparison with the QM predictions. The key notions we are going to deal with are non-commutative space, the Coulomb-Kepler problem, and symmetry.
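
    On the classical side reviewed above, the LRL vector A = p × L − mk r̂ is conserved along a Kepler orbit. A minimal numerical check (unit mass and coupling, a hypothetical elliptic orbit, simple leapfrog integration) illustrates this conservation:

```python
import numpy as np

# Classical Kepler problem (m = k = 1, toy units): integrate with a leapfrog
# scheme and check that the Laplace-Runge-Lenz vector
#   A = p x L - m k r/|r|
# stays (numerically) constant along the orbit.
m = k = 1.0
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.2, 0.0])          # elliptic orbit (v^2 < 2k/|r|)

def accel(r):
    return -k * r / np.linalg.norm(r) ** 3

def lrl(r, v):
    p = m * v
    L = np.cross(r, p)
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

A0 = lrl(r, v)                         # (0.44, 0, 0): eccentricity 0.44
dt = 1e-3
v += 0.5 * dt * accel(r)               # leapfrog: initial half kick
for _ in range(20000):
    r += dt * v
    v += dt * accel(r)
v -= 0.5 * dt * accel(r)               # synchronize v with final r

drift = np.linalg.norm(lrl(r, v) - A0)
print(drift)                            # small: A is conserved
```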

  8. Adjustable vector Airy light-sheet single optical tweezers: negative radiation forces on a subwavelength spheroid and spin torque reversal

    NASA Astrophysics Data System (ADS)

    Mitri, Farid G.

    2018-01-01

    Generalized solutions of vector Airy light-sheets, adjustable per their derivative order m, are introduced stemming from the Lorenz gauge condition and Maxwell's equations using the angular spectrum decomposition method. The Cartesian components of the incident radiated electric, magnetic and time-averaged Poynting vector fields in free space (excluding evanescent waves) are determined and computed with particular emphasis on the derivative order of the Airy light-sheet and the polarization of the magnetic vector potential forming the beam. Negative transverse time-averaged Poynting vector components can arise, while the longitudinal counterparts are always positive. Moreover, the analysis is extended to compute the optical radiation force and spin torque vector components on a lossless dielectric prolate subwavelength spheroid in the framework of the electric dipole approximation. The results show that negative forces and spin torque sign reversal arise depending on the derivative order of the beam, the polarization of the magnetic vector potential, and the orientation of the subwavelength prolate spheroid in space. The spin torque sign reversal suggests that counter-clockwise or clockwise rotations around the center of mass of the subwavelength spheroid can occur. The results find useful applications in single Airy light-sheet tweezers, and in particle manipulation, handling and rotation, to name a few examples.

  9. Distance between RBS and AUG plays an important role in overexpression of recombinant proteins.

    PubMed

    Berwal, Sunil K; Sreejith, R K; Pal, Jayanta K

    2010-10-15

    The spacing between ribosome binding site (RBS) and AUG is crucial for efficient overexpression of genes when cloned in prokaryotic expression vectors. We undertook a brief study on the overexpression of genes cloned in Escherichia coli expression vectors, wherein the spacing between the RBS and the start codon was varied. SDS-PAGE and Western blot analysis indicated a high level of protein expression only in constructs where the spacing between RBS and AUG was approximately 40 nucleotides or more, despite the synthesis of the transcripts in the representative cases investigated. Copyright 2010 Elsevier Inc. All rights reserved.

  10. The development of vector based 2.5D print methods for a painting machine

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna

    2013-02-01

    Through recent trends in the application of digitally printed decorative finishes to products, CAD, 3D additive layer manufacturing and research in material perception, [1, 2] there is a growing interest in the accurate rendering of materials and tangible displays. Although current advances in colour management and inkjet printing have meant that users can take for granted high-quality colour and resolution in their printed images, digital methods for transferring a photographic coloured image from screen to paper are constrained by pixel count, file size, colorimetric conversion between colour spaces and the gamut limits of input and output devices. This paper considers new approaches to applying alternative colour palettes by using a vector-based approach through the application of paint mixtures, towards what could be described as a 2.5D printing method. The objective is not to apply an image to a textured surface, but to make texture and colour integral to the mark that, like a brush, delineates the contours in the image. The paper describes the difference between the way inks and paints are mixed and applied. When transcribing the fluid appearance of a brush stroke, there is a difference between a halftone printed mark and a painted mark. The issue of surface quality is significant to subjective qualities when studying the appearance of ink or paint on paper. The paper provides examples of a range of vector marks that are then transcribed into brush strokes by the painting machine.

  11. Feature generation using genetic programming with application to fault classification.

    PubMed

    Guo, Hong; Jack, Lindsay B; Nandi, Asoke K

    2005-02-01

    One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features, or directly from raw data, in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space, and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to automatically discover the different bearing conditions using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the computation time compared with genetic algorithms (GA), and therefore makes a practical realization of the solution more feasible.

  12. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is essential for public security. Human gait based identification focuses on recognizing a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper a human gait identification method based on two-dimensional Principal Component Analysis and temporal-space analysis is proposed. Using background estimation and image subtraction we can obtain a sequence of binary images from the surveillance video. By comparing the difference of two adjacent images in the gait image sequence, we can obtain a sequence of binary difference images. Each binary difference image indicates how the body moves during walking. We use the following steps to extract the temporal-space features from the difference image sequence: projecting one difference image onto the Y axis or X axis yields two vectors; projecting every difference image in the sequence onto the Y axis or X axis therefore yields two matrices. These two matrices characterize the style of one walk. Two-Dimensional Principal Component Analysis (2DPCA) is then used to transform these two matrices into two vectors while preserving the maximum separability. Finally the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of our methods is illustrated using the CASIA Gait Database.
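
    The projection-plus-2DPCA pipeline described in this record can be illustrated on synthetic silhouettes. This is a toy sketch under invented data (three fake 10-frame sequences of 16x16 binary "silhouettes"), keeping only the Y-axis projection and a top-2 2DPCA projection for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def axis_projection_matrix(frames):
    """Stack per-frame difference images projected onto the Y axis (row sums)."""
    diffs = np.abs(np.diff(frames, axis=0))       # binary difference images
    return diffs.sum(axis=2)                      # (n_frames-1, height)

def gait_signature(frames, W):
    return axis_projection_matrix(frames) @ W     # 2DPCA-style projection

# Toy walking sequences: B is a noisy copy of A, C moves twice as fast.
def make_seq(step):
    frames = np.zeros((10, 16, 16))
    for t in range(10):
        s = (t * step) % 12
        frames[t, 4:12, s:s + 4] = 1              # moving rectangular "body"
    return frames

A, C = make_seq(1), make_seq(2)
B = np.clip(A + (rng.random(A.shape) < 0.02), 0, 1)

# 2DPCA: leading eigenvectors of the image covariance G = mean(M^T M).
mats = [axis_projection_matrix(s) for s in (A, B, C)]
G = sum(m.T @ m for m in mats) / len(mats)
vals, vecs = np.linalg.eigh(G)
W = vecs[:, -2:]                                  # top-2 projection axes

sigA, sigB, sigC = (gait_signature(s, W).ravel() for s in (A, B, C))
d_same = np.linalg.norm(sigA - sigB)
d_diff = np.linalg.norm(sigA - sigC)
print(d_same < d_diff)   # similar gaits are closer in feature space
```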

  13. Algebraic and radical potential fields. Stability domains in coordinate and parametric space

    NASA Astrophysics Data System (ADS)

    Uteshev, Alexei Yu.

    2018-05-01

    A dynamical system dX/dt = F(X; A) is treated, where F(X; A) is a polynomial (or, more generally, radical-containing) function in the vectors of state variables X ∈ ℝn and parameters A ∈ ℝm. We are looking for stability domains in both spaces, i.e. (a) a domain ℙ ⊂ ℝm such that for any parameter vector specialization A ∈ ℙ, there exists a stable equilibrium for the dynamical system, and (b) a domain 𝕊 ⊂ ℝn such that any point X* ∈ 𝕊 could be made a stable equilibrium by a suitable specialization of the parameter vector A.
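
    A purely numerical stand-in for the stability-domain question (the paper works symbolically) is to linearize F at an equilibrium and test the Jacobian eigenvalues over a parameter scan. The system below is a made-up polynomial example with one scalar parameter a, whose stability domain for the origin is approximately {a > 0}:

```python
import numpy as np

# Toy polynomial system dX/dt = F(X; a) with state X = (x, y):
#   dx/dt = y
#   dy/dt = -a*x - x**3 - y
# Linearization at the origin gives eigenvalues of [[0, 1], [-a, -1]],
# so the origin is asymptotically stable exactly for a > 0.
def F(X, a):
    x, y = X
    return np.array([y, -a * x - x ** 3 - y])

def jacobian(X, a, h=1e-6):
    """Numerical Jacobian of F at X via central finite differences."""
    n = len(X)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(X + e, a) - F(X - e, a)) / (2 * h)
    return J

def origin_is_stable(a):
    eig = np.linalg.eigvals(jacobian(np.zeros(2), a))
    return bool(np.all(eig.real < 0))

# Scan the parameter axis: the recovered stability domain P is {a > 0}.
stable = [a for a in np.linspace(-2, 2, 8) if origin_is_stable(a)]
print(stable)
```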

  14. Enhanced secure 4-D modulation space optical multi-carrier system based on joint constellation and Stokes vector scrambling.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2018-03-19

    This paper proposes and demonstrates an enhanced secure 4-D modulation optical generalized filter bank multi-carrier (GFBMC) system based on joint constellation and Stokes vector scrambling. The constellation and Stokes vectors are scrambled by using different scrambling parameters. A multi-scroll Chua's circuit map is adopted as the chaotic model. A large secure key space can be obtained due to the multi-scroll attractors and the independent operability of subcarriers. A 40.32 Gb/s encrypted optical GFBMC signal with 128 parallel subcarriers is successfully demonstrated in the experiment. The results show good resistance against an illegal receiver and indicate a potential way forward for future optical multi-carrier systems.
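
    The scrambling principle can be sketched in miniature. Note the stand-in: the paper derives its key stream from a multi-scroll Chua's circuit, while this sketch uses a simple logistic map, and the 16 random unit vectors below are a made-up "Stokes vector" symbol stream:

```python
import numpy as np

def chaotic_permutation(n, x0=0.37, mu=3.99):
    """Derive a permutation of n symbol slots from a logistic-map orbit.
    (Simplified stand-in for the paper's multi-scroll Chua's circuit;
    x0 and mu together play the role of the secret key.)"""
    x, orbit = x0, []
    for _ in range(n):
        x = mu * x * (1 - x)
        orbit.append(x)
    return np.argsort(orbit)              # chaotic values -> permutation

# Toy stream of 16 "Stokes vector" symbols (random unit 3-vectors).
rng = np.random.default_rng(7)
symbols = rng.normal(size=(16, 3))
symbols /= np.linalg.norm(symbols, axis=1, keepdims=True)

perm = chaotic_permutation(len(symbols))
scrambled = symbols[perm]                 # transmitter side
recovered = scrambled[np.argsort(perm)]   # legal receiver holding the key

# A receiver with a different key derives a different shuffle and fails.
perm_wrong = chaotic_permutation(len(symbols), x0=0.38)

print(np.allclose(recovered, symbols))    # True
print(np.array_equal(perm, perm_wrong))
```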

  15. Direct k-space imaging of Mahan cones at clean and Bi-covered Cu(111) surfaces

    NASA Astrophysics Data System (ADS)

    Winkelmann, Aimo; Akin Ünal, A.; Tusche, Christian; Ellguth, Martin; Chiang, Cheng-Tien; Kirschner, Jürgen

    2012-08-01

    Using a specifically tailored experimental approach, we revisit the exemplary effect of photoemission from quasi-free electronic states in crystals. Applying a momentum microscope, we measure photoelectron momentum patterns emitted into the complete half-space above the sample after excitation from a linearly polarized laser light source. By the application of a fully three-dimensional (3D) geometrical model of direct optical transitions, we explain the characteristic intensity distributions that are formed by the photoelectrons in k-space under the combination of energy conservation and crystal momentum conservation in the 3D bulk as well as at the two-dimensional (2D) surface. For bismuth surface alloys on Cu(111), the energy-resolved photoelectron momentum patterns allow us to identify specific emission processes in which bulk excited electrons are subsequently diffracted by an atomic 2D surface grating. The polarization dependence of the observed intensity features in momentum space is explained based on the different relative orientations of characteristic reciprocal space directions with respect to the electric field vector of the incident light.

  16. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
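
    The estimation problem described here is linear: a site with unit position vector r on a rigid plate moves with velocity v = ω × r, so stacking one 3x3 block per site gives a design matrix for the Euler vector ω. The sketch below is a hedged toy (made-up plate, network and noise levels, and plain Tikhonov/ridge regularization as a simple stand-in for the paper's tailored regularizer), contrasting ordinary and regularized least squares on a geographically clustered network:

```python
import numpy as np

rng = np.random.default_rng(3)

# Site velocities on a rigid plate obey v = omega x r (r: unit position
# vector; omega: Euler vector, here in rad/yr for a hypothetical plate).
def cross_matrix(r):
    """Matrix M with M @ omega = omega x r (i.e. M = -[r]_x)."""
    x, y, z = r
    return np.array([[0, z, -y], [-z, 0, x], [y, -x, 0]])

omega_true = np.array([1.0e-9, -2.0e-9, 3.0e-9])

# A small regional network (poor geometry: sites clustered together),
# which makes the component of omega along the network centroid weak.
lats = np.deg2rad(rng.uniform(35, 40, 6))
lons = np.deg2rad(rng.uniform(30, 36, 6))
sites = np.column_stack([np.cos(lats) * np.cos(lons),
                         np.cos(lats) * np.sin(lons),
                         np.sin(lats)])

G = np.vstack([cross_matrix(r) for r in sites])       # (18, 3) design
d = G @ omega_true + rng.normal(0, 2e-10, 18)         # noisy velocities

# Ordinary vs Tikhonov-regularized least squares (mu is illustrative).
ols = np.linalg.lstsq(G, d, rcond=None)[0]
mu = 1e-2
ridge = np.linalg.solve(G.T @ G + mu * np.eye(3), G.T @ d)

print(np.linalg.norm(ols - omega_true), np.linalg.norm(ridge - omega_true))
```

    The ridge solution trades a little fit (its residual is never smaller than the OLS residual) for damping of the poorly observed direction, which is the general mechanism the abstract's regularized estimator exploits.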

  17. Terrorism/Criminology/Sociology via Magnetism-Hamiltonian ``Models''?!: Black Swans; What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe?''(Charles Middleton, aka ``His Imperial Majesty The Emperor Ming `The Merciless!!!''

    NASA Astrophysics Data System (ADS)

    Carrott, Anthony; Siegel, Edward Carl-Ludwig; Hoover, John-Edgar; Ness, Elliott

    2013-03-01

    Terrorism/Criminology/Sociology: non-linear applied-mathematician (``nose-to-the-grindstone''/``gearheadism'') ``modelers'': Worden, Short, ...criminologists/counter-terrorists/sociologists confront [SIAM Conf. on Nonlinearity, Seattle(12); Canadian Sociology Conf., Burnaby(12)]. ``The `Sins' of the Fathers Visited Upon the Sons'': Zeno vs Ising vs Heisenberg vs Stoner vs Hubbard vs Siegel ``SODHM''(But NO Y!!!) vs ...??? Magnetism, and in turn these models, are themselves confronted BY MAGNETISM, via relatively magnetism/metal-insulator conductivity/percolation phase-transitions critical-phenomena-illiterate non-linear applied-mathematician (nose-to-the-grindstone/``gearheadism'') ``modelers''. What Secrets Lie Buried in Magnetism?; ``Magnetism Will Conquer the Universe!!!''[Charles Middleton, aka ``His Imperial Majesty The Emperor Ming `The Merciless!!!'''] magnetism-Hamiltonian phase-transitions percolation-``models''!: Zeno(~2350 BCE) to Peter the Pilgrim(1150) to Gilbert(1600) to Faraday(1815-1820) to Tate(1870-1880) to Ewing(1882) hysteresis to Barkhausen(1885) to Curie(1895)-Weiss(1895) to Ising-Lenz(r-space/localized-scalar/discrete/1911) to Heisenberg(r-space/localized-vector/discrete/1927) to Preisach(1935) to Stoner(electron/k-space/itinerant-vector/discrete/39) to Stoner-Wohlfarth(technical-magnetism hysteresis/r-space/itinerant-vector/discrete/48) to Hubbard-Longuet-Higgins (k-space versus r-space/

  18. Space Science

    NASA Image and Video Library

    1990-10-01

    Using the Solar Vector Magnetograph, a solar observation facility at NASA's Marshall Space Flight Center (MSFC), scientists from the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama, are monitoring the explosive potential of magnetic areas of the Sun. This effort could someday lead to better prediction of severe space weather, a phenomenon that occurs when blasts of particles and magnetic fields from the Sun impact the magnetosphere, the magnetic bubble around the Earth. When massive solar explosions, known as coronal mass ejections, blast through the Sun's outer atmosphere and plow toward Earth at speeds of thousands of miles per second, the resulting effects can be harmful to communication satellites and astronauts outside the Earth's magnetosphere. Like severe weather on Earth, severe space weather can be costly. On the ground, the magnetic storm wrought by these solar particles can knock out electric power. The researchers from MSFC and NSSTC's solar physics group develop instruments for measuring magnetic fields on the Sun. With these instruments, the group studies the origin, structure, and evolution of the solar magnetic field and the impact it has on Earth's space environment. This photograph shows the Solar Vector Magnetograph and Dr. Mona Hagyard of MSFC, the director of the observatory who leads the development, operation and research program of the Solar Vector Magnetograph.

  19. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of: converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  20. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better-conditioned basis in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
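
    The ill-conditioning that motivates RBF-RA is easy to reproduce. A small sketch (Gaussian kernel and a made-up set of 10 equispaced nodes) shows the condition number of the RBF-Direct interpolation matrix exploding as the shape parameter eps shrinks toward the flat limit:

```python
import numpy as np

# Gaussian RBF interpolation matrix A_ij = exp(-(eps*|x_i - x_j|)^2).
# As the shape parameter eps -> 0 (the "flat" regime) the matrix becomes
# numerically singular, which is why RBF-Direct fails and stabilized
# algorithms (RBF-QR, RBF-GA, RBF-RA) are needed.
x = np.linspace(0, 1, 10)

def interp_matrix(eps):
    r = np.abs(x[:, None] - x[None, :])
    return np.exp(-(eps * r) ** 2)

conds = {eps: np.linalg.cond(interp_matrix(eps)) for eps in (2.0, 0.5, 0.1)}
for eps, c in conds.items():
    print(f"eps = {eps:4}  cond(A) = {c:.2e}")
```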

  1. Dengue vector control: present status and future prospects.

    PubMed

    Yap, H H; Chong, N L; Foo, A E; Lee, C Y

    1994-12-01

    Dengue Fever (DF) and Dengue Haemorrhagic Fever (DHF) have been the most common urban diseases in Southeast Asia since the 1950s. More recently, the diseases have spread to Central and South America and are now considered worldwide diseases. Both Aedes aegypti and Aedes albopictus are involved in the transmission of DF/DHF in the Southeast Asian region. The paper discusses the present status and future prospects of Aedes control with reference to the Malaysian experience. Vector control approaches are discussed, including source reduction and environmental management; larviciding with chemicals (synthetic insecticides, insect growth regulators and microbial insecticides); and adulticiding, which includes personal protection measures (household insecticide products and repellents) for long-term control and space sprays (both thermal fogging and ultra-low-volume sprays) as short-term epidemic measures. The potential incorporation of IGRs and Bacillus thuringiensis H-14 (Bti) as larvicides in addition to insecticides (temephos) is discussed. The advantages of using water-based sprays over oil-based (diesel) sprays, and of spray formulations that provide both larvicidal and adulticidal effects and would consequently have greater impact on overall vector and disease control in DF/DHF, are highlighted.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Bipasha; Davies, C. T. H.; Donald, G. C.

    Here, we compare correlators for pseudoscalar and vector mesons made from valence strange quarks using the clover quark and highly improved staggered quark (HISQ) formalisms in full lattice QCD. We use fully nonperturbative methods to normalise vector and axial vector current operators made from HISQ quarks, clover quarks and from combining HISQ and clover fields. This allows us to test expectations for the renormalisation factors based on perturbative QCD, with implications for the error budget of lattice QCD calculations of the matrix elements of clover-staggeredmore » $b$-light weak currents, as well as further HISQ calculations of the hadronic vacuum polarisation. We also compare the approach to the (same) continuum limit in clover and HISQ formalisms for the mass and decay constant of the $$\\phi$$ meson. Our final results for these parameters, using single-meson correlators and neglecting quark-line disconnected diagrams are: $$m_{\\phi} =$$ 1.023(5) GeV and $$f_{\\phi} = $$ 0.238(3) GeV in good agreement with experiment. These results come from calculations in the HISQ formalism using gluon fields that include the effect of $u$, $d$, $s$ and $c$ quarks in the sea with three lattice spacing values and $$m_{u/d}$$ values going down to the physical point.« less

  3. A feature selection approach towards progressive vector transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Miao, Ru; Song, Jia; Feng, Min

    2017-09-01

    WebGIS is widely used for visualizing and sharing geospatial information over the Internet. To improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered in the progressive transmission. However, studies on progressive transmission of large-volume vector data have mostly focused on map generalization in the field of cartography, and have rarely discussed the quantitative selection of geographic features. This paper applies information theory to measuring the feature importance of vector maps. A measurement model for the amount of information carried by vector features is defined to deal with feature selection issues. The measurement model involves a geometry factor, a spatial distribution factor and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is presented to improve the transmission of vector data. To clearly demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
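
    A hedged sketch of the three-factor importance model: the feature records, the factor definitions (log-scaled vertex count for geometry, an area weight for spatial distribution, self-information of the attribute class for the thematic factor) and the additive combination below are all invented stand-ins for the paper's actual model, but they show how a transmission order falls out of such a score:

```python
import math

# Hypothetical feature records: (id, vertex count, area weight, attribute class).
features = [
    ("river_1", 420, 0.30, "river"),
    ("road_7",   95, 0.05, "road"),
    ("road_8",   88, 0.05, "road"),
    ("lake_2",  240, 0.45, "lake"),
    ("trail_3",  12, 0.01, "trail"),
]

# Thematic self-information: rare attribute classes carry more information.
counts = {}
for _, _, _, cls in features:
    counts[cls] = counts.get(cls, 0) + 1
n = len(features)

def importance(feat):
    _, vertices, area, cls = feat
    geometry = math.log2(1 + vertices)          # geometry factor
    spatial = area                              # spatial-distribution factor
    thematic = -math.log2(counts[cls] / n)      # thematic attribute factor
    return geometry + spatial + thematic        # simple additive model

order = sorted(features, key=importance, reverse=True)
print([f[0] for f in order])   # transmit most informative features first
```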

  4. Consolidating tactical planning and implementation frameworks for integrated vector management in Uganda.

    PubMed

    Okia, Michael; Okui, Peter; Lugemwa, Myers; Govere, John M; Katamba, Vincent; Rwakimari, John B; Mpeka, Betty; Chanda, Emmanuel

    2016-04-14

    Integrated vector management (IVM) is the recommended approach for controlling some vector-borne diseases (VBD). In the face of current challenges to disease vector control, IVM is vital to achieve national targets set for VBD control. Though global efforts, especially for combating malaria, now focus on elimination and eradication, IVM remains useful for Uganda, which is principally still in the control phase of the malaria continuum. This paper outlines the processes undertaken to consolidate tactical planning and implementation frameworks for IVM in Uganda. The Uganda National Malaria Control Programme, with its efforts to implement an IVM approach to vector control, was the 'case' for this study. Integrated management of malaria vectors in Uganda remained an underdeveloped component of malaria control policy. In 2012, knowledge and perceptions of malaria vector control policy and IVM were assessed, and recommendations for a specific IVM policy were made. In 2014, a thorough vector control needs assessment (VCNA) was conducted according to WHO recommendations. The findings of the VCNA informed the development of the national IVM strategic guidelines. Information sources for this study included all available data and accessible archived documentary records on VBD control in Uganda. The literature was reviewed, adapted to the local context and translated into the consolidated tactical framework. WHO recommends implementation of IVM as the main strategy for vector control and has encouraged member states to adopt the approach. However, many VBD-endemic countries lack IVM policy frameworks to guide implementation of the approach. In Uganda most VBDs coexist and could be managed more effectively in tandem. In order to successfully control malaria and other VBD and move towards their elimination, the country needs to scale up proven and effective vector control interventions and also learn from the experience of other countries.
The IVM strategy is important in consolidating inter-sectoral collaboration and coordination and providing the tactical direction for effective deployment of vector control interventions along the five key elements of the approach and to align them with contemporary epidemiology of VBD in the country. Uganda has successfully established an evidence-based IVM approach and consolidated strategic planning and operational frameworks for VBD control. However, operating implementation arrangements as outlined in the national strategic guidelines for IVM and managing insecticide resistance, as well as improving vector surveillance, are imperative. In addition, strengthened information, education and communication/behaviour change and communication, collaboration and coordination will be crucial in scaling up and using vector control interventions.

  5. The Vector-Ballot Approach for Online Voting Procedures

    NASA Astrophysics Data System (ADS)

    Kiayias, Aggelos; Yung, Moti

    Looking at current cryptographic-based e-voting protocols, one can distinguish three basic design paradigms (or approaches): (a) Mix-Networks based, (b) Homomorphic Encryption based, and (c) Blind Signatures based. Each of the three possesses different advantages and disadvantages w.r.t. the basic properties of (i) efficient tallying, (ii) universal verifiability, and (iii) allowing write-in ballot capability (in addition to predetermined candidates). In fact, none of the approaches results in a scheme that simultaneously achieves all three. This is unfortunate, since the three basic properties are crucial for efficiency, integrity and versatility (flexibility), respectively. Further, one can argue that a serious business offering of voting technology should offer a flexible technology that achieves various election goals with a single user interface. This motivates our goal, which is to suggest a new "vector-ballot" based approach for secret-ballot e-voting that is based on three new notions: Provably Consistent Vector Ballot Encodings, Shrink-and-Mix Networks and Punch-Hole-Vector-Ballots. At the heart of our approach is the combination of mix networks and homomorphic encryption under a single user interface; given this, it is rather surprising that it achieves much more than any of the previous approaches for e-voting achieved in terms of the basic properties. Our approach is presented in two generic designs called "homomorphic vector-ballots with write-in votes" and "multi-candidate punch-hole vector-ballots"; both of our designs can be instantiated over any homomorphic encryption function.
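
    One building block behind the homomorphic-encryption paradigm mentioned above is additively homomorphic tallying: ballots are encrypted, the ciphertexts are multiplied, and only the aggregate is decrypted. A toy sketch with "exponential ElGamal" (a standard construction, shown here with tiny, deliberately insecure parameters; this illustrates the paradigm generally, not the paper's vector-ballot scheme):

```python
import random

# Exponential ElGamal: Enc(m) = (g^r, g^m * h^r) mod p. Multiplying two
# ciphertexts componentwise yields an encryption of the SUM of the votes.
p = 2**61 - 1                      # a Mersenne prime (toy-sized, NOT secure)
g = 5
rnd = random.Random(42)
x = rnd.randrange(2, p - 2)        # secret key
h = pow(g, x, p)                   # public key

def encrypt(m):
    r = rnd.randrange(2, p - 2)
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

votes = [1, 0, 1, 1, 0, 1, 0, 1]   # yes/no ballots
ciphertexts = [encrypt(v) for v in votes]

# Homomorphic aggregation: multiply ciphertexts componentwise.
c1 = c2 = 1
for a, b in ciphertexts:
    c1, c2 = (c1 * a) % p, (c2 * b) % p

# Decrypt the aggregate: recover g^(sum), then brute-force the small exponent.
gm = (c2 * pow(c1, p - 1 - x, p)) % p
tally = next(t for t in range(len(votes) + 1) if pow(g, t, p) == gm)
print(tally)   # 5: the individual ballots were never decrypted
```

    Efficient tallying comes for free from this structure; the paper's contribution is combining such a homomorphic component with mix networks so that write-in ballots can coexist with it under one interface.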

  6. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
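
    The core of the Bag of Bonds representation is simple to sketch: collect Coulomb-like pair terms Z_i Z_j / r_ij into per-bond-type "bags", sort each bag, and pad to fixed length so every molecule maps to a vector of the same dimension. The geometries and bag sizes below are made up for illustration:

```python
import numpy as np

# Minimal Bag-of-Bonds-style featurization (hedged sketch of the idea,
# not the paper's exact recipe or parameters).
def bag_of_bonds(charges, coords, bag_sizes):
    bags = {key: [] for key in bag_sizes}
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            key = tuple(sorted((charges[i], charges[j])))
            r = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
            bags[key].append(charges[i] * charges[j] / r)   # Coulomb pair term
    vec = []
    for key, size in bag_sizes.items():
        entries = sorted(bags[key], reverse=True)            # sort each bag
        vec.extend(entries + [0.0] * (size - len(entries)))  # pad to size
    return np.array(vec)

# Two toy "molecules": nuclear charges + made-up 3-D coordinates (angstroms).
bag_sizes = {(1, 1): 1, (1, 8): 2, (8, 8): 1}
water  = bag_of_bonds([8, 1, 1],
                      [(0, 0, 0), (0.96, 0, 0), (-0.24, 0.93, 0)], bag_sizes)
water2 = bag_of_bonds([8, 1, 1],
                      [(0, 0, 0), (0.98, 0, 0), (-0.25, 0.94, 0)], bag_sizes)

# Nearby geometries map to nearby vectors -- the property kernel models use.
print(np.linalg.norm(water - water2))
```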

  7. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  8. A support vector machine based test for incongruence between sets of trees in tree space

    PubMed Central

    2012-01-01

    Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviates from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two sets of gene trees generated under different species trees. Our statistical test can also incorporate tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions.
The software GeneOut is freely available under the GNU public license. PMID:22909268
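
    The permutation-test logic described above can be sketched in a few lines. GeneOut measures separation with an SVM; the sketch below substitutes a much simpler centroid-distance statistic (an assumption for brevity, not the paper's method) and presumes the trees have already been mapped to vectors.

```python
# Permutation test for whether two sets of vectorized trees come from the
# same distribution. Separation statistic here is centroid distance, a
# stand-in for the SVM-based separation GeneOut actually uses.
import numpy as np

rng = np.random.default_rng(0)

def separation(A, B):
    return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))

def permutation_pvalue(A, B, n_perm=999):
    observed = separation(A, B)
    pooled = np.vstack([A, B])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        pa, pb = pooled[perm[:len(A)]], pooled[perm[len(A):]]
        count += separation(pa, pb) >= observed
    return (count + 1) / (n_perm + 1)       # add-one to avoid p = 0

# Two clearly different "tree vector" distributions:
A = rng.normal(0.0, 1.0, size=(30, 5))
B = rng.normal(1.5, 1.0, size=(30, 5))
p_value = permutation_pvalue(A, B)
```

Because the null distribution is built by relabeling the pooled sample, the test is nonparametric, which is exactly the property the abstract highlights.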

  9. Metacommunity and phylogenetic structure determine wildlife and zoonotic infectious disease patterns in time and space.

    PubMed

    Suzán, Gerardo; García-Peña, Gabriel E; Castro-Arellano, Ivan; Rico, Oscar; Rubio, André V; Tolsá, María J; Roche, Benjamin; Hosseini, Parviez R; Rizzoli, Annapaola; Murray, Kris A; Zambrana-Torrelio, Carlos; Vittecoq, Marion; Bailly, Xavier; Aguirre, A Alonso; Daszak, Peter; Prieur-Richard, Anne-Helene; Mills, James N; Guégan, Jean-Francois

    2015-02-01

    The potential for disease transmission at the interface of wildlife, domestic animals and humans has become a major concern for public health and conservation biology. Research in this subject is commonly conducted at local scales while the regional context is neglected. We argue that prevalence of infection at local and regional levels is influenced by three mechanisms occurring at the landscape level in a metacommunity context. First, (1) dispersal, colonization, and extinction of pathogens, reservoir or vector hosts, and nonreservoir hosts, may be due to stochastic and niche-based processes, thus determining distribution of all species, and then their potential interactions, across local communities (metacommunity structure). Second, (2) anthropogenic processes may drive environmental filtering of hosts, nonhosts, and pathogens. Finally, (3) phylogenetic diversity relative to reservoir or vector host(s), within and between local communities may facilitate pathogen persistence and circulation. Using a metacommunity approach, public health scientists may better evaluate the factors that predispose certain times and places for the origin and emergence of infectious diseases. The multidisciplinary approach we describe fits within a comprehensive One Health and Ecohealth framework addressing zoonotic infectious disease outbreaks and their relationship to their hosts, other animals, humans, and the environment.

  10. Color measurement and discrimination

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Theories of color measurement attempt to provide a quantitative means for predicting whether two lights will be discriminable to an average observer. All color measurement theories can be characterized as follows: suppose lights a and b evoke responses from three color channels characterized as vectors, v(a) and v(b); the vector difference v(a) - v(b) corresponds to a set of channel responses that would be generated by some real light, call it *. According to theory, a and b will be discriminable when * is detectable. A detailed development and test of the classic color measurement approach are reported. In the absence of a luminance component in the test stimuli, a and b, the theory holds well. In the presence of a luminance component, the theory is clearly false. When a luminance component is present, discrimination judgments depend largely on whether the lights being discriminated fall in separate, categorical regions of color space. The results suggest that sensory estimation of surface color uses different methods, and the choice of method depends upon properties of the image. When there is significant luminance variation a categorical method is used, while in the absence of significant luminance variation judgments are continuous and consistent with the measurement approach.
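
    The vector-difference test described above can be sketched directly. The channel responses and detection threshold below are hypothetical values chosen for illustration, not measured data from the report.

```python
# The classic color-measurement prediction: lights a and b are discriminable
# when the difference of their channel-response vectors, v(a) - v(b), would
# itself be a detectable light. Responses and threshold are assumed values.
import numpy as np

def discriminable(v_a, v_b, detection_threshold=1.0):
    v_star = v_a - v_b                       # the "light *" of the theory
    return np.linalg.norm(v_star) > detection_threshold

v_a = np.array([12.0, 7.5, 3.1])             # three-channel response to light a
v_b = np.array([11.8, 7.4, 3.0])             # a nearby light: below threshold
v_c = np.array([15.0, 5.0, 1.0])             # a more different light
```

As the abstract notes, this prediction holds without a luminance component but breaks down when luminance variation pushes judgments into categorical regions of color space.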

  11. Applications of Aerodynamic Forces for Spacecraft Orbit Maneuverability in Operationally Responsive Space and Space Reconstitution Needs

    DTIC Science & Technology

    2012-03-01

    [Abstract garbled in extraction. Recoverable fragments define re as the Earth's equatorial radius, Pn as the Legendre polynomial, and L as the geocentric latitude, and describe computing atmospheric density at an altitude above an oblate Earth from the position vector in the Geocentric Equatorial frame, including conversions between geodetic and geocentric latitude.]

  12. Pure state consciousness and its local reduction to neuronal space

    NASA Astrophysics Data System (ADS)

    Duggins, A. J.

    2013-01-01

    The single neuronal state can be represented as a vector in a complex space, spanned by an orthonormal basis of integer spike counts. In this model a scalar element of experience is associated with the instantaneous firing rate of a single sensory neuron over repeated stimulus presentations. Here the model is extended to composite neural systems that are tensor products of single neuronal vector spaces. Depiction of the mental state as a vector on this tensor product space is intended to capture the unity of consciousness. The density operator is introduced as its local reduction to the single neuron level, from which the firing rate can again be derived as the objective correlate of a subjective element. However, the relational structure of perceptual experience only emerges when the non-local mental state is considered. A metric of phenomenal proximity between neuronal elements of experience is proposed, based on the cross-correlation function of neurophysiology, but constrained by the association of theoretical extremes of correlation/anticorrelation in inseparable 2-neuron states with identical and opponent elements respectively.
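
    The formalism described above can be sketched numerically: a two-unit composite state as a tensor product of single-neuron state vectors over a small spike-count basis, and its local reduction to one unit via the partial trace. The specific amplitudes are illustrative, not taken from the paper.

```python
# Tensor-product composite state and its reduced density operator.
# Basis for each unit: spike counts {0, 1, 2} (a truncated, illustrative basis).
import numpy as np

psi1 = np.array([0.8, 0.6, 0.0], dtype=complex)   # unit 1 state (normalized)
psi2 = np.array([0.0, 0.6, 0.8], dtype=complex)   # unit 2 state (normalized)

psi = np.kron(psi1, psi2)                  # composite state on the product space
rho = np.outer(psi, psi.conj())            # density operator of the pure state

# Partial trace over unit 2 -> local reduction to unit 1:
d = 3
rho_1 = np.trace(rho.reshape(d, d, d, d), axis1=1, axis2=3)

# Expected spike count of unit 1, the "objective correlate" of the element:
counts = np.diag([0.0, 1.0, 2.0])
mean_count = np.real(np.trace(rho_1 @ counts))
```

For a product state the reduction is again pure; for an inseparable two-neuron state (the correlated/anticorrelated extremes the abstract mentions) the same partial trace yields a mixed local state.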

  13. Construction and reconstruction concept in mathematics instruction

    NASA Astrophysics Data System (ADS)

    Mumu, Jeinne; Charitas Indra Prahmana, Rully; Tanujaya, Benidiktus

    2017-12-01

    The purpose of this paper is to describe two learning activities undertaken by lecturers, so that students can understand a mathematical concept. The mathematical concept studied in this research is the Vector Space in Linear Algebra instruction. Classroom Action Research used as a research method with pre-service mathematics teacher at University of Papua as the research subject. Student participants are divided into two parallel classes, 24 students in regular class, and remedial class consist of 18 students. Both approaches, construct and reconstruction concept, are implemented on both classes. The result shows that concept construction can only be done in regular class while in remedial class, learning with concept construction approach is not able to increase students' understanding on the concept taught. Understanding the concept of a student in a remedial class can only be carried out using the concept reconstruction approach.

  14. A fosmid cloning strategy for detecting the widest possible spectrum of microbes from the international space station drinking water system.

    PubMed

    Choi, Sangdun; Chang, Mi Sook; Stuecker, Tara; Chung, Christine; Newcombe, David A; Venkateswaran, Kasthuri

    2012-12-01

    In this study, fosmid cloning strategies were used to assess the microbial populations in water from the International Space Station (ISS) drinking water system (henceforth referred to as Prebiocide and Tank A water samples). The goals of this study were: to compare the sensitivity of the fosmid cloning strategy with that of traditional culture-based and 16S rRNA-based approaches and to detect the widest possible spectrum of microbial populations during the water purification process. Initially, microbes could not be cultivated, and conventional PCR failed to amplify 16S rDNA fragments from these low biomass samples. Therefore, randomly primed rolling-circle amplification was used to amplify any DNA that might be present in the samples, followed by size selection by using pulsed-field gel electrophoresis. The amplified high-molecular-weight DNA from both samples was cloned into fosmid vectors. Several hundred clones were randomly selected for sequencing, followed by Blastn/Blastx searches. Sequences encoding specific genes from Burkholderia, a species abundant in the soil and groundwater, were found in both samples. Bradyrhizobium and Mesorhizobium, which belong to rhizobia, a large community of nitrogen fixers often found in association with plant roots, were present in the Prebiocide samples. Ralstonia, which is prevalent in soils with a high heavy metal content, was detected in the Tank A samples. The detection of many unidentified sequences suggests the presence of potentially novel microbial fingerprints. The bacterial diversity detected in this pilot study using a fosmid vector approach was higher than that detected by conventional 16S rRNA gene sequencing.

  15. Root-sum-square structural strength verification approach

    NASA Technical Reports Server (NTRS)

    Lee, Henry M.

    1994-01-01

    Utilizing a proposed fixture design or some variation thereof, this report presents a verification approach to strength test space flight payload components, electronics boxes, mechanisms, lines, fittings, etc., which traditionally do not lend themselves to classical static loading. The fixture, through use of ordered Euler rotation angles derived herein, can be mounted on existing vibration shakers and can provide an innovative method of applying single axis flight load vectors. The versatile fixture effectively loads protoflight or prototype components in all three axes simultaneously by use of a sinusoidal burst of desired magnitude at less than one-third the first resonant frequency. Cost savings along with improved hardware confidence are shown. The end product is an efficient way to verify experiment hardware for both random vibration and strength.
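
    The ordered-rotation idea can be sketched as follows: choose Euler angles so that the shaker's single drive axis lines up with a desired three-axis flight-load direction. The axis convention (drive along z) and the rotation order below are assumptions for illustration, not the report's derived angle set.

```python
# Orient a fixture with two ordered rotations so that a single-axis shaker
# load acts along a desired 3-axis load vector. Convention assumed: the
# shaker drives along z; tilt about y, then spin about z.
import numpy as np

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

target = np.array([2.0, 1.0, 2.0])          # desired flight-load direction
u = target / np.linalg.norm(target)

theta = np.arccos(u[2])                     # tilt angle away from z
phi = np.arctan2(u[1], u[0])                # azimuth about z
R = rot_z(phi) @ rot_y(theta)               # ordered rotation

shaker_axis = np.array([0.0, 0.0, 1.0])
achieved = R @ shaker_axis                  # direction the fixture loads
```

This is how one shaker burst can load a component in all three axes simultaneously: the fixture orientation, not the shaker, encodes the load vector.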

  16. Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui

    2017-10-01

    In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by actuators in control. On the other hand, for unmatched uncertainties with their projection in unmatched space being not zero, based on a (vector) relative degree condition, additive functions are designed to compensate for the uncertainties from output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.

  17. Quasiperiodic one-dimensional photonic crystals with adjustable multiple photonic bandgaps.

    PubMed

    Vyunishev, Andrey M; Pankin, Pavel S; Svyakhovskiy, Sergey E; Timofeev, Ivan V; Vetrov, Stepan Ya

    2017-09-15

    We propose an elegant approach to produce photonic bandgap (PBG) structures with multiple photonic bandgaps by constructing quasiperiodic photonic crystals (QPPCs) composed of a superposition of photonic lattices with different periods. Generally, QPPC structures exhibit both aperiodicity and multiple PBGs due to their long-range order. They are described by a simple analytical expression, instead of quasiperiodic tiling approaches based on substitution rules. Here we describe the optical properties of QPPCs exhibiting two PBGs that can be tuned independently. PBG interband spacing and its depth can be varied by choosing appropriate reciprocal lattice vectors and their amplitudes. These effects are confirmed by the proof-of-concept measurements made for the porous silicon-based QPPC of the appropriate design.

  18. Illustrating dynamical symmetries in classical mechanics: The Laplace-Runge-Lenz vector revisited

    NASA Astrophysics Data System (ADS)

    O'Connell, Ross C.; Jagannathan, Kannan

    2003-03-01

    The inverse square force law admits a conserved vector that lies in the plane of motion. This vector has been associated with the names of Laplace, Runge, and Lenz, among others. Many workers have explored aspects of the symmetry and degeneracy associated with this vector and with analogous dynamical symmetries. We define a conserved dynamical variable α that characterizes the orientation of the orbit in two-dimensional configuration space for the Kepler problem and an analogous variable β for the isotropic harmonic oscillator. This orbit orientation variable is canonically conjugate to the angular momentum component normal to the plane of motion. We explore the canonical one-parameter group of transformations generated by α(β). Because we have an obvious pair of conserved canonically conjugate variables, it is desirable to use them as a coordinate-momentum pair. In terms of these phase space coordinates, the form of the Hamiltonian is nearly trivial because neither member of the pair can occur explicitly in the Hamiltonian. From these considerations we gain a simple picture of dynamics in phase space. The procedure we use is in the spirit of the Hamilton-Jacobi method.
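
    The conservation property described above is easy to verify numerically. The sketch below (units m = k = 1, a simple leapfrog integrator; the initial conditions are arbitrary bound-orbit data, not from the paper) checks that the Laplace-Runge-Lenz vector A = p × L − r̂ stays fixed along a Kepler orbit.

```python
# Numerical check that the Laplace-Runge-Lenz vector is conserved for the
# inverse-square force. Planar motion, m = k = 1, leapfrog integration.
import numpy as np

def lrl(r, v):
    Lz = r[0] * v[1] - r[1] * v[0]          # angular momentum, z-component
    rhat = r / np.linalg.norm(r)
    return np.array([v[1] * Lz, -v[0] * Lz]) - rhat   # p x L - r_hat in 2D

def accel(r):
    return -r / np.linalg.norm(r) ** 3      # inverse-square attraction

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.2])                    # bound (elliptical) initial data
A_start = lrl(r, v)

dt = 1e-3
v = v + 0.5 * dt * accel(r)                 # leapfrog: initial half-kick
for _ in range(20000):
    r = r + dt * v
    v = v + dt * accel(r)
v = v - 0.5 * dt * accel(r)                 # resynchronize velocity
A_end = lrl(r, v)
```

The conserved direction of A is exactly the orbit-orientation information carried by the variable α in the abstract: A points along the major axis toward perihelion.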

  19. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not acceptable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, which is validated not to lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification.
    The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss Elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
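
    The isomorphism claim above can be sketched in a few lines: coordinates of feature vectors in an orthonormal basis of their own span preserve every pairwise dot product. QR factorization stands in below for the Gauss Elimination step of RMEKLM (an assumption for brevity; both produce a basis of the same span).

```python
# Dot products are preserved when feature vectors are re-expressed in an
# orthonormal basis of their own span (the "reduced orthonormal subspace").
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 10))                # 6 feature vectors in R^10

Q, _ = np.linalg.qr(X.T)                    # columns: orthonormal basis of span(X)
C = X @ Q                                   # coordinates in the reduced subspace

gram_original = X @ X.T
gram_reduced = C @ C.T                      # identical Gram matrix
```

Because all kernel computations depend only on dot products, working in the smaller coordinate space changes nothing about the learned machine while shrinking storage, which is the practical point of the paper.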

  20. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  1. Transcranial Magnetic Stimulation: An Automated Procedure to Obtain Coil-specific Models for Field Calculations.

    PubMed

    Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel

    2015-01-01

    Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. Our aim was to develop an approach to reconstruct the magnetic vector potential based on automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  4. Habitat suitability and ecological niche profile of major malaria vectors in Cameroon

    PubMed Central

    2009-01-01

    Background Suitability of environmental conditions determines a species distribution in space and time. Understanding and modelling the ecological niche of mosquito disease vectors can, therefore, be a powerful predictor of the risk of exposure to the pathogens they transmit. In Africa, five anophelines are responsible for over 95% of total malaria transmission. However, detailed knowledge of the geographic distribution and ecological requirements of these species is to date still inadequate. Methods Indoor-resting mosquitoes were sampled from 386 villages covering the full range of ecological settings available in Cameroon, Central Africa. Using a predictive species distribution modeling approach based only on presence records, habitat suitability maps were constructed for the five major malaria vectors Anopheles gambiae, Anopheles funestus, Anopheles arabiensis, Anopheles nili and Anopheles moucheti. The influence of 17 climatic, topographic, and land use variables on mosquito geographic distribution was assessed by multivariate regression and ordination techniques. Results Twenty-four anopheline species were collected, of which 17 are known to transmit malaria in Africa. Ecological Niche Factor Analysis, Habitat Suitability modeling and Canonical Correspondence Analysis revealed marked differences among the five major malaria vector species, both in terms of ecological requirements and niche breadth. Eco-geographical variables (EGVs) related to human activity had the highest impact on habitat suitability for the five major malaria vectors, with areas of low population density being of marginal or unsuitable habitat quality. Sunlight exposure, rainfall, evapo-transpiration, relative humidity, and wind speed were among the most discriminative EGVs separating "forest" from "savanna" species. 
Conclusions The distribution of major malaria vectors in Cameroon is strongly affected by the impact of humans on the environment, with variables related to proximity to human settings being among the best predictors of habitat suitability. The ecologically more tolerant species An. gambiae and An. funestus were recorded in a wide range of eco-climatic settings. The other three major vectors, An. arabiensis, An. moucheti, and An. nili, were more specialized. Ecological niche and species distribution modelling should help improve malaria vector control interventions by targeting places and times where the impact on vector populations and disease transmission can be optimized. PMID:20028559

  5. Habitat suitability and ecological niche profile of major malaria vectors in Cameroon.

    PubMed

    Ayala, Diego; Costantini, Carlo; Ose, Kenji; Kamdem, Guy C; Antonio-Nkondjio, Christophe; Agbor, Jean-Pierre; Awono-Ambene, Parfait; Fontenille, Didier; Simard, Frédéric

    2009-12-23

    Suitability of environmental conditions determines a species distribution in space and time. Understanding and modelling the ecological niche of mosquito disease vectors can, therefore, be a powerful predictor of the risk of exposure to the pathogens they transmit. In Africa, five anophelines are responsible for over 95% of total malaria transmission. However, detailed knowledge of the geographic distribution and ecological requirements of these species is to date still inadequate. Indoor-resting mosquitoes were sampled from 386 villages covering the full range of ecological settings available in Cameroon, Central Africa. Using a predictive species distribution modeling approach based only on presence records, habitat suitability maps were constructed for the five major malaria vectors Anopheles gambiae, Anopheles funestus, Anopheles arabiensis, Anopheles nili and Anopheles moucheti. The influence of 17 climatic, topographic, and land use variables on mosquito geographic distribution was assessed by multivariate regression and ordination techniques. Twenty-four anopheline species were collected, of which 17 are known to transmit malaria in Africa. Ecological Niche Factor Analysis, Habitat Suitability modeling and Canonical Correspondence Analysis revealed marked differences among the five major malaria vector species, both in terms of ecological requirements and niche breadth. Eco-geographical variables (EGVs) related to human activity had the highest impact on habitat suitability for the five major malaria vectors, with areas of low population density being of marginal or unsuitable habitat quality. Sunlight exposure, rainfall, evapo-transpiration, relative humidity, and wind speed were among the most discriminative EGVs separating "forest" from "savanna" species. 
The distribution of major malaria vectors in Cameroon is strongly affected by the impact of humans on the environment, with variables related to proximity to human settings being among the best predictors of habitat suitability. The ecologically more tolerant species An. gambiae and An. funestus were recorded in a wide range of eco-climatic settings. The other three major vectors, An. arabiensis, An. moucheti, and An. nili, were more specialized. Ecological niche and species distribution modelling should help improve malaria vector control interventions by targeting places and times where the impact on vector populations and disease transmission can be optimized.

  6. Generalized sidelobe canceller beamforming method for ultrasound imaging.

    PubMed

    Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang

    2017-03-01

    A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution and the robustness against noise of the traditional generalized sidelobe canceller (GSC) and of the combined GSC and coherence factor method (GSC-CF). In the GSC algorithm, the weighting vector is divided into adaptive and non-adaptive parts, but the non-adaptive part does not block all of the desired signal. The modified steer vector of the IGSC algorithm is generated by projecting the non-adaptive vector onto the signal space constructed from the covariance matrix of the received data. The blocking matrix is generated from the orthogonal complement of the modified steer vector, and the weighting vector is updated accordingly. The performance of IGSC was investigated by simulations and experiments. In simulations, IGSC outperformed GSC-CF in spatial resolution by 0.1 mm, with or without noise, as well as in contrast ratio. The proposed IGSC can be further improved by combining it with CF. The experimental results also validated the effectiveness of the proposed algorithm on a dataset provided by the University of Michigan.
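
    The subspace step described above can be sketched numerically: project a nominal steer vector onto the signal subspace (leading eigenvectors of the sample covariance), then build a blocking matrix from the orthogonal complement of the result, so the blocked channel contains none of the modified steer direction. The array size, signal model, and noise levels below are illustrative assumptions.

```python
# Modified steer vector via signal-subspace projection, and a blocking
# matrix spanning its orthogonal complement. One source assumed.
import numpy as np

rng = np.random.default_rng(2)
M = 8                                        # array elements
a0 = np.ones(M) / np.sqrt(M)                 # nominal (possibly mismatched) steer vector

# Simulated snapshots: signal along a perturbed steer vector, plus noise
a_true = a0 + 0.05 * rng.normal(size=M)
snapshots = np.outer(a_true, rng.normal(size=200)) + 0.1 * rng.normal(size=(M, 200))
R = snapshots @ snapshots.T / 200            # sample covariance

w, V = np.linalg.eigh(R)
Es = V[:, -1:]                               # signal subspace (largest eigenvalue)
a_mod = Es @ (Es.T @ a0)                     # projection of the nominal vector
a_mod /= np.linalg.norm(a_mod)

# Blocking matrix: orthonormal basis of the complement of a_mod
_, _, Vt = np.linalg.svd(a_mod[None, :])
B = Vt[1:, :]                                # (M-1) x M, rows orthogonal to a_mod
residual = np.linalg.norm(B @ a_mod)         # should be ~0: signal fully blocked
```

Blocking the projected rather than the nominal steer vector is what keeps desired-signal energy out of the adaptive sidelobe-cancelling path under steering errors.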

  7. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
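
    The dimension-reduction step described above can be sketched with a minimal example: an SVD of a small term-document matrix, truncated to k concepts, with documents then compared in concept space. The counts below are invented for illustration.

```python
# Minimal latent semantic indexing sketch: truncated SVD of a term-document
# matrix, then document similarity in the reduced concept space.
import numpy as np

# Rows: terms, columns: documents (made-up counts; docs 0,2 share terms,
# docs 1,3 share a disjoint set of terms).
A = np.array([[2., 0., 1., 0.],
              [1., 0., 2., 0.],
              [0., 3., 0., 1.],
              [0., 2., 0., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_concepts = (np.diag(s[:k]) @ Vt[:k]).T   # each row: a document in concept space

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sim_0_2 = cosine(doc_concepts[0], doc_concepts[2])  # topically similar docs
sim_0_1 = cosine(doc_concepts[0], doc_concepts[1])  # topically unrelated docs
```

The U, s, Vt factors are the unitary operators and scaling the article discusses: queries and documents are mapped through them into a low-dimensional conceptual index.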

  8. Hypercyclic subspaces for Frechet space operators

    NASA Astrophysics Data System (ADS)

    Petersson, Henrik

    2006-07-01

    A continuous linear operator T is hypercyclic if there is an x such that the orbit {T^n x} is dense, and such a vector x is said to be hypercyclic for T. Recent progress shows that it is possible to characterize Banach space operators that have a hypercyclic subspace, i.e., an infinite-dimensional closed subspace consisting, except for zero, of hypercyclic vectors. The following is known to hold: a Banach space operator T has a hypercyclic subspace if there is a sequence (n_i) and an infinite-dimensional closed subspace E such that T is hereditarily hypercyclic for (n_i) and T^{n_i} -> 0 pointwise on E. In this note we extend this result to the setting of Frechet spaces that admit a continuous norm, and study some applications for important function spaces. As an application we also prove that any infinite-dimensional separable Frechet space with a continuous norm admits an operator with a hypercyclic subspace.

  9. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
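
The space-time tradeoff the authors exploit can be illustrated, in a generic Feynman-path-sum form that is not their actual design, by computing a single output amplitude as a sum over intermediate basis states: memory grows with circuit depth rather than with 2^n, at the price of exponential time.

```python
from math import sqrt

# A gate is a function elem(out_state, in_state) -> amplitude, with basis
# states encoded as n-bit integers. No 2^n-entry state vector is ever stored.

def h_gate(qubit):
    """Hadamard on one qubit, as an on-the-fly matrix-element function."""
    s = 1 / sqrt(2)
    def elem(out, inp):
        if (out ^ inp) & ~(1 << qubit):      # all other bits must match
            return 0.0
        # H = s * [[1, 1], [1, -1]]: the entry is -s only for in=1, out=1.
        return -s if (inp >> qubit) & (out >> qubit) & 1 else s
    return elem

def cnot_gate(control, target):
    def elem(out, inp):
        flipped = inp ^ (1 << target) if (inp >> control) & 1 else inp
        return 1.0 if out == flipped else 0.0
    return elem

def amplitude(gates, n_qubits, out_state, in_state=0):
    """<out| U_k ... U_1 |in>, summed over intermediate basis states."""
    if not gates:
        return 1.0 if out_state == in_state else 0.0
    *rest, last = gates
    return sum(last(out_state, mid) * amplitude(rest, n_qubits, mid, in_state)
               for mid in range(2 ** n_qubits))

# Bell-state circuit: H on qubit 0, then CNOT(control=0, target=1).
bell = [h_gate(0), cnot_gate(0, 1)]
```

For this circuit the amplitudes of |00> and |11> each come out to 1/sqrt(2) and the cross terms vanish; the recursion depth, and hence the memory footprint, tracks the gate count rather than the state-vector size.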

  10. Robust support vector regression networks for function approximation with outliers.

    PubMed

    Chuang, Chen-Chia; Su, Shun-Feng; Jeng, Jin-Tsong; Hsiao, Chih-Ching

    2002-01-01

    Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to have good robustness against noise. However, when the parameters used in SVR are improperly selected, overfitting may still occur, and the selection of these parameters is not straightforward. Moreover, in SVR, outliers may be taken as support vectors, and such an inclusion of outliers may lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In the approach, traditional robust learning techniques are employed to improve the learning performance for any selected parameters. In the simulations, RSVR always improved the performance of the learned systems. Moreover, even when training lasted for a long period, the testing errors did not go up; in other words, the overfitting phenomenon is indeed suppressed.

  11. Global Transport Networks and Infectious Disease Spread

    PubMed Central

    Tatem, A.J.; Rogers, D.J.; Hay, S.I.

    2011-01-01

    Air, sea and land transport networks continue to expand in reach, speed of travel and volume of passengers and goods carried. Pathogens and their vectors can now move further, faster and in greater numbers than ever before. Three important consequences of global transport network expansion are infectious disease pandemics, vector invasion events and vector-borne pathogen importation. This review briefly examines some of the important historical examples of these disease and vector movements, such as the global influenza pandemics, the devastating Anopheles gambiae invasion of Brazil and the recent increases in imported Plasmodium falciparum malaria cases. We then outline potential approaches for future studies of disease movement, focussing on vector invasion and vector-borne disease importation. Such approaches allow us to explore the potential implications of international air travel, shipping routes and other methods of transport on global pathogen and vector traffic. PMID:16647974

  12. Left ventricular hypertrophy index based on a combination of frontal and transverse planes in the ECG and VCG: Diagnostic utility of cardiac vectors

    NASA Astrophysics Data System (ADS)

    Bonomini, Maria Paula; Juan Ingallina, Fernando; Barone, Valeria; Antonucci, Ricardo; Valentinuzzi, Max; Arini, Pedro David

    2016-04-01

    The changes that left ventricular hypertrophy (LVH) induces in depolarization and repolarization vectors are well known. We analyzed the performance of the electrocardiographic and vectorcardiographic transverse planes (TP in the ECG and XZ in the VCG) and frontal planes (FP in the ECG and XY in the VCG) in discriminating LVH patients from control subjects. In an age-balanced set of 58 patients, the directions and amplitudes of QRS-complex and T-wave vectors were studied. The repolarization vector significantly decreased in modulus from controls to LVH in the transverse plane (TP: 0.45±0.17 mV vs. 0.24±0.13 mV, p<0.0005; XZ: 0.43±0.16 mV vs. 0.26±0.11 mV, p<0.005), while the depolarization vector significantly changed in angle in the electrocardiographic frontal plane (controls vs. LVH, FP: 48.24±33.66° vs. 46.84±35.44°, p<0.005; XY: 20.28±35.20° vs. 19.35±12.31°, NS). Several LVH indexes were proposed combining such information in both the ECG and VCG spaces. A subset of those indexes with AUC values greater than 0.7 was studied further. This subset comprised four indexes, three of them belonging to the ECG space. Two of the four indexes presented the best ROC curves (AUC values: 0.78 and 0.75, respectively); one belonged to the ECG space and the other to the VCG space. Both indexes showed a sensitivity of 86% and a specificity of 70%. In conclusion, the proposed indexes can favorably complement LVH diagnosis.

  13. 3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

    2010-03-01

    The shape of the optic nerve head (ONH) is reconstructed automatically from stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared to natural-scene stereo, fundus images are noisy because of limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi-scale pixel feature vectors that are robust to noise are formulated using a combination of pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data were collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates for the shape of the ONH that are close to the OCT-based shape, and it shows great potential to help computer-aided diagnosis of glaucoma and other related retinal diseases.
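
The intensity-plus-gradient feature idea can be sketched in one dimension: each pixel gets a vector of smoothed intensities and gradients across several scales, and candidate correspondences are scored by the distance between vectors. This toy uses a box filter and a single scanline pair in place of the paper's scale-space kernels and images.

```python
def box_blur(sig, radius):
    n = len(sig)
    return [sum(sig[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def gradient(sig):
    n = len(sig)
    return [sig[min(n - 1, i + 1)] - sig[max(0, i - 1)] for i in range(n)]

def feature_vector(sig, i, scales=(0, 1, 2)):
    """Smoothed intensity and gradient at pixel i, across several scales."""
    feats = []
    for r in scales:
        smooth = box_blur(sig, r)
        feats.extend([smooth[i], gradient(smooth)[i]])
    return feats

def match_score(f1, f2):
    """Sum-of-squared-differences matching score between feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2))

# Toy stereo scanlines: the right line is the left one shifted by 2 pixels.
left  = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
right = [0, 5, 9, 5, 0, 0, 0, 0, 0, 0]
best = min(range(5), key=lambda d: match_score(feature_vector(left, 4),
                                               feature_vector(right, 4 - d)))
```

On this toy pair the minimum score lands at disparity 2, the true shift.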

  14. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a key human biometric characteristic but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency feature vector. We then apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested on several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.

  15. Application of Diagnostic Analysis Tools to the Ares I Thrust Vector Control System

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Melcher, Kevin J.; Chicatelli, Amy K.; Johnson, Stephen B.

    2010-01-01

    The NASA Ares I Crew Launch Vehicle is being designed to support missions to the International Space Station (ISS), to the Moon, and beyond. The Ares I is undergoing design and development utilizing commercial-off-the-shelf tools and hardware when applicable, along with cutting edge launch technologies and state-of-the-art design and development. In support of the vehicle s design and development, the Ares Functional Fault Analysis group was tasked to develop an Ares Vehicle Diagnostic Model (AVDM) and to demonstrate the capability of that model to support failure-related analyses and design integration. One important component of the AVDM is the Upper Stage (US) Thrust Vector Control (TVC) diagnostic model-a representation of the failure space of the US TVC subsystem. This paper first presents an overview of the AVDM, its development approach, and the software used to implement the model and conduct diagnostic analysis. It then uses the US TVC diagnostic model to illustrate details of the development, implementation, analysis, and verification processes. Finally, the paper describes how the AVDM model can impact both design and ground operations, and how some of these impacts are being realized during discussions of US TVC diagnostic analyses with US TVC designers.

  16. Closedness of orbits in a space with SU(2) Poisson structure

    NASA Astrophysics Data System (ADS)

    Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad

    2014-06-01

    The closedness of orbits of central forces is addressed in a three-dimensional space in which the Poisson bracket among the coordinates is that of the SU(2) Lie algebra. In particular it is shown that among problems with spherically symmetric potential energies, it is only the Kepler problem for which all bounded orbits are closed. In analogy with the case of the ordinary space, a conserved vector (apart from the angular momentum) is explicitly constructed, which is responsible for the orbits being closed. This is the analog of the Laplace-Runge-Lenz vector. The algebra of the constants of the motion is also worked out.
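
For reference, the Poisson structure in question and the ordinary-space conserved vector it generalizes can be written as follows; the normalization constant λ is an assumption of this sketch, not taken from the paper.

```latex
% SU(2)-type Poisson structure among the coordinates (normalization assumed):
\{x_i, x_j\} = \lambda\,\epsilon_{ijk}\,x_k, \qquad i,j,k \in \{1,2,3\}

% Ordinary-space analog: the Laplace-Runge-Lenz vector, conserved for the
% Kepler potential V(r) = -k/r:
\mathbf{A} = \mathbf{p}\times\mathbf{L} - m\,k\,\hat{\mathbf{r}}, \qquad \{H,\mathbf{A}\} = 0
```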

  17. Professor Herman Burger (1893-1965), eminent teacher and scientist, who laid the theoretical foundations of vectorcardiography--and electrocardiography.

    PubMed

    van Herpen, Gerard

    2014-01-01

    Einthoven not only designed a high quality instrument, the string galvanometer, for recording the ECG, he also shaped the conceptual framework to understand it. He reduced the body to an equilateral triangle and the cardiac electric activity to a dipole, represented by an arrow (i.e. a vector) in the triangle's center. Up to the present day the interpretation of the ECG is based on the model of a dipole vector being projected on the various leads. The model is practical but intuitive, not physically founded. Burger analysed the relation between heart vector and leads according to the principles of physics. It then follows that an ECG lead must be treated as a vector (lead vector) and that the lead voltage is not simply proportional to the projection of the vector on the lead, but must be multiplied by the value (length) of the lead vector, the lead strength. Anatomical lead axis and electrical lead axis are different entities and the anatomical body space must be distinguished from electrical space. Appreciation of these underlying physical principles should contribute to a better understanding of the ECG. The development of these principles by Burger is described, together with some personal notes and a sketch of the personality of this pioneer of medical physics. Copyright © 2014. Published by Elsevier Inc.
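
Burger's point can be made concrete with a few lines of arithmetic: the lead voltage is the dot product of the heart vector with the lead vector, i.e., the projection of the heart vector on the lead direction multiplied by the lead strength. The numbers below are arbitrary illustrative values, not measured lead vectors.

```python
# V = c . H = |c| * (projection of H on the lead direction),
# so lead strength |c| matters, not just the projection.
from math import sqrt

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lead_voltage(lead_vector, heart_vector):
    return dot(lead_vector, heart_vector)

# Hypothetical numbers for illustration only (arbitrary units):
heart = (1.0, 0.5, 0.0)
lead_I = (0.9, 0.1, 0.0)   # electrical lead vector, not the anatomical axis

strength = sqrt(dot(lead_I, lead_I))          # lead strength |c|
unit = tuple(x / strength for x in lead_I)    # lead direction
projection = dot(unit, heart)                 # projection of H on the lead

# The two formulations agree: V = |c| * projection.
assert abs(lead_voltage(lead_I, heart) - strength * projection) < 1e-12
```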

  18. GRASS GIS: The first Open Source Temporal GIS

    NASA Astrophysics Data System (ADS)

    Gebbert, Sören; Leppelt, Thomas

    2015-04-01

    GRASS GIS is a full featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support[1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities[2]. New spatio-temporal data types were introduced in GRASS GIS version 7, to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets are simplifying the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameter in temporal modules. The handling of space time datasets is therefore equal to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, that has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. 
The functionality ranges from space time dataset and time stamped map layer management, through temporal aggregation, temporal accumulation, spatio-temporal statistics, spatio-temporal sampling, temporal algebra, temporal topology analysis, time series animation and temporal topology visualization, to time series import and export capabilities with support for the NetCDF and VTK data formats. We will present several temporal modules that support parallel processing of raster and 3D raster time series. [1] M. Neteler, D. Beaudette, P. Cavallini, L. Lami, J. Cepicky, "GRASS GIS", in Open Source Approaches in Spatial Data Handling, G. Brent Hall, Michael G. Leahy (eds.), Vol. 2 (2008), pp. 171-199, doi:10.1007/978-3-540-74831-19. [2] Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environ. Model. Softw. 53, 1-12. [3] Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS Intl Journal of Geo-Information 2, 201-219. [4] Löwe, P., Klump, J., Thaler, J. (2012): The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster, (Geophysical Research Abstracts Vol. 14, EGU2012-4491, 2012), General Assembly European Geosciences Union (Vienna, Austria 2012). [5] Akhter, S., Aida, K., Chemin, Y., 2010. "GRASS GIS on High Performance Computing with MPI, OpenMP and Ninf-G Programming Framework". ISPRS Conference, Kyoto, 9-12 August 2010

  19. Vector representation of lithium and other mica compositions

    NASA Technical Reports Server (NTRS)

    Burt, Donald M.

    1991-01-01

    In contrast to mathematics, where a vector of one component defines a line, in chemical petrology a one-component system is a point, and two components are needed to define a line, three for a plane, and four for a space. Here, an attempt is made to show how these differences in the definition of a component can be resolved, with lithium micas used as an example. In particular, the condensed composition space theoretically accessible to Li-Fe-Al micas is shown to be an irregular three-dimensional polyhedron, rather than the triangle Al(3+)-Fe(2+)-Li(+), used by some researchers. This result is demonstrated starting with the annite composition and using exchange operators graphically as vectors that generate all of the other mica compositions.
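
The exchange-operator idea translates directly into vector arithmetic over element counts. The composition and operator below are generic placeholders, not the actual annite formula or the paper's operators; they show only how an operator generates new compositions and how a negative count flags a composition outside the accessible polyhedron.

```python
from collections import Counter

def apply_exchange(composition, operator, times=1):
    """Add a signed exchange vector to a composition vector."""
    out = Counter(composition)
    for element, delta in operator.items():
        out[element] += times * delta
    return {e: n for e, n in out.items() if n != 0}

# Hypothetical site counts and a generic (Li + Al)-for-2Fe exchange operator:
start = {"Fe": 3, "Al": 1, "Si": 3}
exchange = {"Li": +1, "Al": +1, "Fe": -2}

step1 = apply_exchange(start, exchange)            # one exchange step
step2 = apply_exchange(start, exchange, times=2)   # overshoots: Fe goes negative
```

step1 is a valid composition; in step2 the Fe count is -1, signalling that two applications of this operator leave the composition space actually accessible, which is how the irregular polyhedron of accessible compositions gets its boundaries.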

  20. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  1. Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging

    DTIC Science & Technology

    2016-12-15

    S. Klein, “HF Vector Sensor for Radio Astronomy : Ground Testing Results,” in AIAA SPACE 2016, ser. AIAA SPACE Forum, American Institute of... astronomy ,” in 2016 IEEE Aerospace Conference, Mar. 2016, pp. 1–17. doi: 10.1109/ AERO.2016.7500688. [4] K.-C. Ho, K.-C. Tan, and B. T. G. Tan, “Estimation of...Statistical Imaging in Radio Astronomy via an Expectation-Maximization Algorithm for Structured Covariance Estimation,” in Statistical Methods in Imaging: IN

  2. Lie theory and control systems defined on spheres

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.

    1972-01-01

    It is shown that in constructing a theory for the most elementary class of control problems defined on spheres, some results from the Lie theory play a natural role. To understand controllability, optimal control, and certain properties of stochastic equations, Lie theoretic ideas are needed. The framework considered here is the most natural departure from the usual linear system/vector space problems which have dominated control systems literature. For this reason results are compared with those previously available for the finite dimensional vector space case.

  3. Space Object Classification Using Fused Features of Time Series Data

    NASA Astrophysics Data System (ADS)

    Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.

    In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.
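
The fusion step itself is simple concatenation of feature groups. The sketch below stands in for the paper's pipeline with placeholder ingredients: a z-scored raw series plus a few hand-rolled summary statistics instead of Gabor-filtered recurrence-plot textures.

```python
from math import sqrt

def zscore(series):
    """Normalize the raw time series to zero mean and unit variance."""
    m = sum(series) / len(series)
    sd = sqrt(sum((x - m) ** 2 for x in series) / len(series)) or 1.0
    return [(x - m) / sd for x in series]

def texture_stats(series):
    """Stand-in texture features: range and a roughness proxy."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return [min(series), max(series),
            sum(abs(d) for d in diffs) / len(diffs)]

def fused_feature(series):
    """Concatenate normalized raw values with the texture summary."""
    return zscore(series) + texture_stats(series)
```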

  4. Secure coherent optical multi-carrier system with four-dimensional modulation space and Stokes vector scrambling.

    PubMed

    Zhang, Lijia; Liu, Bo; Xin, Xiangjun

    2015-06-15

    A secure enhanced coherent optical multi-carrier system based on Stokes vector scrambling is proposed and experimentally demonstrated. The optical signal with four-dimensional (4D) modulation space has been scrambled intra- and inter-subcarriers, where a multi-layer logistic map is adopted as the chaotic model. An experiment with 61.71-Gb/s encrypted multi-carrier signal is successfully demonstrated with the proposed method. The results indicate a promising solution for the physical secure optical communication.
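
The scrambling mechanism can be sketched with a single logistic map: a chaotic sequence seeded by a shared key is sorted into a permutation of subcarrier symbols, which the receiver inverts with the same key. The actual system scrambles a 4D Stokes-space signal with a multi-layer map; this single-layer version only illustrates the principle, and the key value is arbitrary.

```python
def logistic_sequence(x0, n, r=3.99, burn_in=100):
    """Iterate x -> r*x*(1-x), discarding an initial transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def permutation_from_key(key, n):
    chaos = logistic_sequence(key, n)
    return sorted(range(n), key=lambda i: chaos[i])

def scramble(symbols, key):
    perm = permutation_from_key(key, len(symbols))
    return [symbols[p] for p in perm]

def descramble(symbols, key):
    perm = permutation_from_key(key, len(symbols))
    out = [None] * len(symbols)
    for i, p in enumerate(perm):
        out[p] = symbols[i]
    return out

# Round trip with a shared key recovers the original symbol order.
data = list(range(16))
assert descramble(scramble(data, 0.314159), 0.314159) == data
```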

  5. Using trees to compute approximate solutions to ordinary differential equations exactly

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.

  6. Local Gram-Schmidt and covariant Lyapunov vectors and exponents for three harmonic oscillator problems

    NASA Astrophysics Data System (ADS)

    Hoover, Wm. G.; Hoover, Carol G.

    2012-02-01

    We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.
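
The Gram-Schmidt bookkeeping is the classical Benettin-style procedure. As a minimal self-checking sketch, using a constant linear map in place of the oscillator flows studied in the paper (which would need an ODE integrator), repeated reorthonormalization of a tangent basis recovers the exponents:

```python
from math import log, sqrt

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def lyapunov(M, steps=200):
    """Lyapunov exponents of the linear map v -> M v via Gram-Schmidt."""
    basis = [[1.0, 0.0], [0.0, 1.0]]
    sums = [0.0, 0.0]
    for _ in range(steps):
        images = [matvec(M, v) for v in basis]
        new = []
        for k, w in enumerate(images):
            # Orthogonalize against the already-processed vectors...
            for u in new:
                c = sum(a * b for a, b in zip(w, u))
                w = [a - c * b for a, b in zip(w, u)]
            # ...and accumulate the log of the remaining growth factor.
            n = sqrt(sum(a * a for a in w))
            sums[k] += log(n)
            new.append([a / n for a in w])
        basis = new
    return [s / steps for s in sums]

# Diagonal test map: the exponents should be ln 2 and ln 0.5.
exponents = lyapunov([[2.0, 0.0], [0.0, 0.5]])
```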

  7. Sensitivity analysis of the space shuttle to ascent wind profiles

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Austin, L. D., Jr.

    1982-01-01

    A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.

  8. Spatiotemporal image correlation spectroscopy (STICS) theory, verification, and application to protein velocity mapping in living CHO cells.

    PubMed

    Hebert, Benedict; Costantino, Santiago; Wiseman, Paul W

    2005-05-01

    We introduce a new extension of image correlation spectroscopy (ICS) and image cross-correlation spectroscopy (ICCS) that relies on complete analysis of both the temporal and spatial correlation lags for intensity fluctuations from a laser-scanning microscopy image series. This new approach allows measurement of both diffusion coefficients and velocity vectors (magnitude and direction) for fluorescently labeled membrane proteins in living cells through monitoring of the time evolution of the full space-time correlation function. By using filtering in Fourier space to remove frequencies associated with immobile components, we are able to measure the protein transport even in the presence of a large fraction (>90%) of immobile species. We present the background theory, computer simulations, and analysis of measurements on fluorescent microspheres to demonstrate proof of principle, capabilities, and limitations of the method. We demonstrate mapping of flow vectors for mixed samples containing fluorescent microspheres with different emission wavelengths using space-time image cross-correlation. We also present results from two-photon laser-scanning microscopy studies of alpha-actinin/enhanced green fluorescent protein fusion constructs at the basal membrane of living CHO cells. Using space-time image correlation spectroscopy (STICS), we are able to measure protein fluxes with magnitudes on the order of µm/min from retracting lamellar regions and protrusions for adherent cells. We also demonstrate the measurement of correlated directed flows (magnitudes on the order of µm/min) and diffusion of interacting alpha5 integrin/enhanced cyan fluorescent protein and alpha-actinin/enhanced yellow fluorescent protein within living CHO cells. The STICS method permits us to generate complete transport maps of proteins within subregions of the basal membrane even if the protein concentration is too high to perform single particle tracking measurements.
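
The immobile-component filtering amounts, in its simplest form, to zeroing the zero-frequency (DC) bin of each pixel's temporal trace, which is the same as subtracting the temporal mean. A minimal sketch with plain lists in place of an image series:

```python
def remove_immobile(stack):
    """stack[t][i]: intensity of pixel i in frame t.

    Subtracting each pixel's temporal mean removes the time-constant
    (immobile) component, i.e., the DC term in the temporal Fourier domain.
    """
    n_frames = len(stack)
    n_pix = len(stack[0])
    means = [sum(stack[t][i] for t in range(n_frames)) / n_frames
             for i in range(n_pix)]
    return [[stack[t][i] - means[i] for i in range(n_pix)]
            for t in range(n_frames)]

# Immobile pixel (constant 5.0) plus a blip moving across two pixels:
stack = [[5.0, 0.0, 0.0],
         [5.0, 1.0, 0.0],
         [5.0, 0.0, 1.0]]
filtered = remove_immobile(stack)
```

After filtering, the immobile pixel is identically zero and only the moving component survives for the subsequent correlation analysis.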

  9. Computational Methods for Frictional Contact With Applications to the Space Shuttle Orbiter Nose-Gear Tire

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  10. Computational methods for frictional contact with applications to the Space Shuttle orbiter nose-gear tire: Comparisons of experimental measurements and analytical predictions

    NASA Technical Reports Server (NTRS)

    Tanner, John A.

    1996-01-01

    A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical-loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.

  11. Space Weather Impacts to Conjunction Assessment: A NASA Robotic Orbital Safety Perspective

    NASA Technical Reports Server (NTRS)

    Ghrist, Richard; Ghrist, Richard; DeHart, Russel; Newman, Lauri

    2013-01-01

    The National Aeronautics and Space Administration (NASA) recognizes the risk of on-orbit collisions with other satellites and debris objects and has instituted a process to identify and react to close approaches. The charter of the NASA Robotic Conjunction Assessment Risk Analysis (CARA) task is to protect NASA robotic (unmanned) assets from threats posed by other space objects. Monitoring for potential collisions requires formulating close-approach predictions a week or more into the future in order to identify, analyze, and respond to orbital conjunction events of interest. These predictions require propagation of the latest state vector and covariance assuming a predicted atmospheric density and ballistic coefficient. Any differences between the predicted drag used for propagation and the actual drag experienced by the space objects can potentially affect the conjunction event. The space environment itself, in particular how space weather affects atmospheric drag, is therefore an essential element to understand in order to assess the risk of conjunction events effectively. The focus of this research is to develop a better understanding of the impact of space weather on conjunction assessment activities: both accurately determining the current risk and assessing how that risk may change under dynamic space weather conditions. We are engaged in a data-mining exercise to corroborate whether observed changes in a conjunction event's dynamics appear consistent with space weather changes, and we are interested in developing a framework to respond appropriately to uncertainty in predicted space weather. In particular, we use historical conjunction event data products to search for dynamical effects on satellite orbits from changing atmospheric drag.
Increased drag is expected to lower the satellite specific energy and will result in the satellite's being 'later' than expected, which can affect satellite conjunctions in a number of ways depending on the two satellites' orbits and the geometry of the conjunction. These satellite time offsets can form the basis of a new technique under development to determine whether space weather perturbations, such as coronal mass ejections, are likely to increase, decrease, or have a neutral effect on the collision risk due to a particular close approach.

  12. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports

    DOE PAGES

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...

    2017-05-03

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
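    The conventional term-frequency baseline referenced in this record can be illustrated with a minimal sketch. This is a generic nearest-neighbor classifier over term-frequency vectors, not the authors' exact pipeline; the toy report texts and ICD-O-3 labels below are invented for illustration only.

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a document (simple whitespace tokenizer)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(report, labelled):
    """Assign the label of the most similar labelled report (1-nearest neighbor)."""
    vec = tf_vector(report)
    return max(labelled, key=lambda pair: cosine(vec, tf_vector(pair[0])))[1]

# hypothetical labelled reports with ICD-O-3 topography codes
labelled = [
    ("invasive ductal carcinoma of the breast", "C50"),
    ("adenocarcinoma of the upper lobe of the lung", "C34"),
]
print(classify("carcinoma found in breast tissue", labelled))  # C50
```

    In practice such baselines typically add TF-IDF weighting, stemming, and stop-word removal; the sketch keeps only the core vector-space idea.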

  13. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.

  14. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles, and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of the initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, and then estimate the accelerometer bias separately, which is difficult to distinguish under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
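    The gravity-refinement step described above can be sketched as follows. This is a hypothetical, minimal reconstruction of the idea of updating a 2D error state on the tangent plane of the gravity vector while holding its known magnitude fixed; the function names and sample numbers are assumptions, not the authors' implementation.

```python
import math

G = 9.81  # known gravity magnitude (m/s^2)

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def tangent_basis(g):
    """Two unit vectors spanning the tangent plane orthogonal to g."""
    n = norm(g)
    g_hat = tuple(x / n for x in g)
    ref = (1.0, 0.0, 0.0)
    if abs(sum(a * b for a, b in zip(g_hat, ref))) > 0.9:
        ref = (0.0, 1.0, 0.0)  # avoid a near-parallel reference vector
    b1 = cross(g_hat, ref)
    nb1 = norm(b1)
    b1 = tuple(x / nb1 for x in b1)
    b2 = cross(g_hat, b1)
    return b1, b2

def refine(g, d1, d2):
    """Apply a 2D error state (d1, d2) on the tangent plane of g, then
    rescale so the refined vector keeps the known magnitude G."""
    b1, b2 = tangent_basis(g)
    g_new = tuple(gi + d1 * b1i + d2 * b2i for gi, b1i, b2i in zip(g, b1, b2))
    n = norm(g_new)
    return tuple(G * x / n for x in g_new)

g1 = refine((0.1, 0.2, 9.7), 0.05, -0.03)  # rough estimate + one correction
print(norm(g1))  # magnitude is constrained to G
```

    In the actual system the two tangent-plane parameters would be solved for inside a nonlinear optimizer; the sketch shows only why the parameterization has two degrees of freedom.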

  15. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    PubMed

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space where a structured kernel dictionary is established, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class. In the discriminative term, by analyzing the classification mechanism, a correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; the inner product of the discriminative feature with the kernel matrix embedded is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  16. Recent Developments In Theory Of Balanced Linear Systems

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek

    1994-01-01

    Report presents a theoretical study of some issues of controllability and observability of a system represented by a linear, time-invariant mathematical model of the form ẋ = Ax + Bu, y = Cx + Du, x(0) = x0, where x is the n-dimensional vector representing the state of the system; u is the p-dimensional vector representing the control input to the system; y is the q-dimensional vector representing the output of the system; n, p, and q are integers; x(0) is the initial (zero-time) state vector; and the set of matrices (A, B, C, D) is said to constitute the state-space representation of the system.
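    A state-space model of this form can be simulated with a minimal forward-Euler sketch. The matrices below describe an illustrative damped oscillator, not a system from the report:

```python
def simulate(A, B, C, D, x0, u, dt, steps):
    """Forward-Euler integration of  x' = Ax + Bu,  y = Cx + Du.
    A, B, C, D are matrices as nested lists; x0 is the initial state;
    u is a function of time returning the input vector."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    def add(a, b):
        return [ai + bi for ai, bi in zip(a, b)]
    x = list(x0)
    ys = []
    for k in range(steps):
        uk = u(k * dt)
        ys.append(add(matvec(C, x), matvec(D, uk)))   # y = Cx + Du
        dx = add(matvec(A, x), matvec(B, uk))         # x' = Ax + Bu
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]  # Euler step
    return ys

# damped oscillator: x1' = x2, x2' = -x1 - 0.5*x2 + u
A = [[0.0, 1.0], [-1.0, -0.5]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]   # observe position only
D = [[0.0]]
ys = simulate(A, B, C, D, [1.0, 0.0], lambda t: [0.0], 0.01, 1000)
print(ys[0], ys[-1])  # initial output [1.0]; the oscillation decays
```

    Controllability and observability of (A, B, C, D) are properties of the matrices themselves (e.g., the rank of the controllability matrix), which the simulation above does not test; it only shows the model's input-output behavior.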

  17. A plane-polar approach for far-field construction from near-field measurements [of large spacecraft antennas]

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.; Galindo-Israel, V.; Mittra, R.

    1980-01-01

    The planar configuration with a probe scanning a polar geometry is discussed with reference to its usefulness in the determination of a far field from near-field measurements. The accuracy of the method is verified numerically, using the concept of probe compensation as a vector deconvolution. Advantages of the Jacobi-Bessel series over the fast Fourier transforms for the plane-polar geometry are demonstrated. Finally, the far-field pattern of the Viking high gain antenna is constructed from the plane-polar near-field measured data and compared with the previously measured far-field pattern.

  18. Calculus of nonrigid surfaces for geometry and texture manipulation.

    PubMed

    Bronstein, Alexander; Bronstein, Michael; Kimmel, Ron

    2007-01-01

    We present a geometric framework for automatically finding intrinsic correspondence between three-dimensional nonrigid objects. We model object deformation as near isometries and find the correspondence as the minimum-distortion mapping. A generalization of multidimensional scaling is used as the numerical core of our approach. As a result, we obtain the possibility to manipulate the extrinsic geometry and the texture of the objects as vectors in a linear space. We demonstrate our method on the problems of expression-invariant texture mapping onto an animated three-dimensional face, expression exaggeration, morphing between faces, and virtual body painting.

  19. Novel MSVPWM to reduce the inductor current ripple for Z-source inverter in electric vehicle applications.

    PubMed

    Zhang, Qianfan; Dong, Shuai; Xue, Ping; Zhou, Chaowei; Cheng, ShuKang

    2014-01-01

    A novel modified space vector pulse width modulation (MSVPWM) strategy for the Z-source inverter is presented. By rearranging the position of the shoot-through states, the frequency of the inductor current ripple is kept constant. Compared with existing MSVPWM strategies, the proposed approach reduces the maximum inductor current ripple, so the Z-source network inductor can be designed smaller, which benefits the miniaturization of the electric vehicle controller. The theoretical findings on the novel MSVPWM for the Z-source inverter have been verified by experimental results.
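    The space-vector machinery underlying any SVPWM scheme starts from the Clarke transform and sector identification of the reference voltage vector. The sketch below shows only that generic first step; it does not implement the paper's modified shoot-through placement:

```python
import math

def clarke(va, vb, vc):
    """Amplitude-invariant Clarke transform: three-phase -> alpha/beta plane."""
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = (2.0 / 3.0) * (math.sqrt(3) / 2.0) * (vb - vc)
    return alpha, beta

def sector(alpha, beta):
    """Sector (1..6) of the reference space vector; each sector spans 60 degrees."""
    angle = math.atan2(beta, alpha) % (2 * math.pi)
    return int(angle // (math.pi / 3)) + 1

# balanced three-phase set at electrical angle theta
theta = math.radians(100)
va = math.cos(theta)
vb = math.cos(theta - 2 * math.pi / 3)
vc = math.cos(theta + 2 * math.pi / 3)
a, b = clarke(va, vb, vc)
print(sector(a, b))  # angle of 100 degrees falls in sector 2
```

    Once the sector is known, an SVPWM modulator computes dwell times for the two adjacent active vectors and the zero vector; a Z-source variant additionally schedules shoot-through intervals, which is where the paper's contribution lies.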

  20. Novel MSVPWM to Reduce the Inductor Current Ripple for Z-Source Inverter in Electric Vehicle Applications

    PubMed Central

    Zhang, Qianfan; Dong, Shuai; Xue, Ping; Zhou, Chaowei; Cheng, ShuKang

    2014-01-01

    A novel modified space vector pulse width modulation (MSVPWM) strategy for the Z-source inverter is presented. By rearranging the position of the shoot-through states, the frequency of the inductor current ripple is kept constant. Compared with existing MSVPWM strategies, the proposed approach reduces the maximum inductor current ripple, so the Z-source network inductor can be designed smaller, which benefits the miniaturization of the electric vehicle controller. The theoretical findings on the novel MSVPWM for the Z-source inverter have been verified by experimental results. PMID:24883412

  1. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
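    The vector-matching stage described above can be sketched generically as follows. The feature values, word list, and scoring function are illustrative assumptions; the system's actual image-morphological features are not reproduced here. Note that several stored vectors per word (e.g., one per font) are allowed, and the best per-word score is kept:

```python
import math

def match_word(query_vec, database, k=3):
    """Return the top-k lexicon words whose stored feature vectors best match
    the query image's feature vector (normalized dot product).  Multiple
    vectors per word are allowed; the best-scoring one represents the word."""
    def score(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    best = {}
    for word, vec in database:
        s = score(query_vec, vec)
        if s > best.get(word, -1.0):
            best[word] = s
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

# toy database of (word, feature vector) pairs; two fonts for one word
db = [("kitab", [0.9, 0.1, 0.3]), ("kitab", [0.8, 0.2, 0.4]),
      ("qalam", [0.1, 0.9, 0.5])]
print(match_word([0.85, 0.15, 0.35], db, k=2))  # "kitab" ranks first
```

    The ranked list corresponds to the system's word hypotheses; database pruning would simply restrict which (word, vector) pairs enter the loop.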

  2. SVPWM Technique with Varying DC-Link Voltage for Common Mode Voltage Reduction in a Matrix Converter and Analytical Estimation of its Output Voltage Distortion

    NASA Astrophysics Data System (ADS)

    Padhee, Varsha

    Common Mode Voltage (CMV) in any power converter has been a major contributor to premature motor failures, bearing deterioration, shaft voltage build-up and electromagnetic interference. Intelligent control methods like Space Vector Pulse Width Modulation (SVPWM) techniques provide immense potential and flexibility to reduce CMV, thereby targeting all the aforementioned problems. Other solutions like passive filters, shielded cables and EMI filters add to the volume and cost metrics of the entire system. Smart SVPWM techniques therefore come with a very important advantage of being an economical solution. This thesis discusses a modified space vector technique applied to an Indirect Matrix Converter (IMC) which results in the reduction of common mode voltages and other advanced features. The conventional indirect space vector pulse-width modulation (SVPWM) method of controlling matrix converters involves the usage of two adjacent active vectors and one zero vector for both the rectifying and inverting stages of the converter. By suitable selection of space vectors, the rectifying stage of the matrix converter can generate different levels of virtual DC-link voltage. This capability can be exploited for operation of the converter in different ranges of modulation indices for varying machine speeds. This results in lower common mode voltage and improves the harmonic spectrum of the output voltage, without increasing the number of switching transitions compared to conventional modulation. To summarize, the responsibility of formulating output voltages with a particular magnitude and frequency has been transferred solely to the rectifying stage of the IMC. Estimation of the degree of distortion in the three-phase output voltage is another facet discussed in this thesis.
An understanding of the SVPWM technique and the switching sequence of the space vectors in detail gives the potential to estimate the RMS value of the switched output voltage of any converter, which conceivably aids the sizing and design of output passive filters. An analytical estimation method has been presented to achieve this purpose for an IMC. Knowledge of the fundamental component in the output voltage can be utilized to calculate its Total Harmonic Distortion (THD). The effectiveness of the proposed SVPWM algorithms and the analytical estimation technique is substantiated by simulations in MATLAB / Simulink and experiments on a laboratory prototype of the IMC. Comparison plots have been provided to contrast the performance of the proposed methods with the conventional SVPWM method. The behavior of output voltage distortion and CMV with variation in operating parameters like modulation index and output frequency has also been analyzed.

  3. Implementation of the Orbital Maneuvering Systems Engine and Thrust Vector Control for the European Service Module

    NASA Technical Reports Server (NTRS)

    Millard, Jon

    2014-01-01

    The European Space Agency (ESA) has entered into a partnership with the National Aeronautics and Space Administration (NASA) to develop and provide the Service Module (SM) for the Orion Multipurpose Crew Vehicle (MPCV) Program. The European Service Module (ESM) will provide main engine thrust by utilizing the Space Shuttle Program Orbital Maneuvering System Engine (OMS-E). Thrust Vector Control (TVC) of the OMS-E will be provided by the Orbital Maneuvering System (OMS) TVC, also used during the Space Shuttle Program. NASA will be providing the OMS-E and OMS TVC to ESA as Government Furnished Equipment (GFE) to integrate into the ESM. This presentation will describe the OMS-E and OMS TVC and discuss the implementation of the hardware for the ESM.

  4. Development of a Gravid Trap for Collecting Live Malaria Vectors Anopheles gambiae s.l.

    PubMed Central

    Dugassa, Sisay; Lindh, Jenny M.; Oyieke, Florence; Mukabana, Wolfgang R.; Lindsay, Steven W.; Fillinger, Ulrike

    2013-01-01

    Background Effective malaria vector control targeting indoor host-seeking mosquitoes has resulted in fewer vectors entering houses in many areas of sub-Saharan Africa, with the proportion of vectors outdoors becoming more important in the transmission of this disease. This study aimed to develop a gravid trap for the outdoor collection of the malaria vector Anopheles gambiae s.l. based on evaluation and modification of commercially available gravid traps. Methods Experiments were implemented in an 80 m2 semi-field system where 200 gravid Anopheles gambiae s.s. were released nightly. The efficacy of the Box, CDC and Frommer updraft gravid traps was compared. The Box gravid trap was tested to determine whether the presence of the trap over water and the trap’s sound affected catch size. Mosquitoes approaching the treatment were evaluated using electrocuting nets or detergents added to the water in the trap. Based on the results, a new gravid trap (OviART trap) that provided an open, unobstructed oviposition site was developed and evaluated. Results Box and CDC gravid traps collected similar numbers (relative rate (RR) 0.8, 95% confidence interval (CI) 0.6–1.2; p = 0.284), whereas the Frommer trap caught 70% fewer mosquitoes (RR 0.3, 95% CI 0.2–0.5; p < 0.001). The number of mosquitoes approaching the Box trap was significantly reduced when the trap was positioned over a water-filled basin compared to an open pond (RR 0.7, 95% CI 0.6–0.7; p < 0.001). This effect was not due to the sound of the trap. Catch size increased by 60% (RR 1.6, 1.2–2.2; p = 0.001) with the new OviART trap. Conclusion Gravid An. gambiae s.s. females were visually deterred by the presence of the trapping device directly over the oviposition medium. Based on these investigations, an effective gravid trap was developed that provides open landing space for egg-laying Anopheles. PMID:23861952

  5. Final Technical Report: Distributed Controls for High Penetrations of Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Neely, Jason C.; Rashkin, Lee J.

    2015-12-01

    The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., a two-area, three-area, and four-area power system). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate its utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.

  6. Multilevel Space-Time Aggregation for Bright Field Cell Microscopy Segmentation and Tracking

    PubMed Central

    Inglis, Tiffany; De Sterck, Hans; Sanders, Geoffrey; Djambazian, Haig; Sladek, Robert; Sundararajan, Saravanan; Hudson, Thomas J.

    2010-01-01

    A multilevel aggregation method is applied to the problem of segmenting live cell bright field microscope images. The method employed is a variant of the so-called “Segmentation by Weighted Aggregation” technique, which itself is based on Algebraic Multigrid methods. The variant of the method used is described in detail, and it is explained how it is tailored to the application at hand. In particular, a new scale-invariant “saliency measure” is proposed for deciding when aggregates of pixels constitute salient segments that should not be grouped further. It is shown how segmentation based on multilevel intensity similarity alone does not lead to satisfactory results for bright field cells. However, the addition of multilevel intensity variance (as a measure of texture) to the feature vector of each aggregate leads to correct cell segmentation. Preliminary results are presented for applying the multilevel aggregation algorithm in space-time to temporal sequences of microscope images, with the goal of obtaining space-time segments (“object tunnels”) that track individual cells. The advantages and drawbacks of the space-time aggregation approach for segmentation and tracking of live cells in sequences of bright field microscope images are presented, along with a discussion of how this approach may be used in future work as a building block in a complete and robust segmentation and tracking system. PMID:20467468

  7. Polarization Control with Plasmonic Antenna Tips: A Universal Approach to Optical Nanocrystallography and Vector-Field Imaging

    NASA Astrophysics Data System (ADS)

    Park, Kyoung-Duck; Raschke, Markus B.

    2018-05-01

    Controlling the propagation and polarization vectors in linear and nonlinear optical spectroscopy enables probing the anisotropy of optical responses, providing structural-symmetry-selective contrast in optical imaging. Here we present a novel tilted antenna-tip approach to control the optical vector field by breaking the axial symmetry of the nano-probe in tip-enhanced near-field microscopy. This gives rise to a localized plasmonic antenna effect with significantly enhanced optical field vectors with control of both in-plane and out-of-plane components. We use the resulting vector-field specificity in the symmetry-selective nonlinear optical response of second-harmonic generation (SHG) for a generalized approach to optical nano-crystallography and -imaging. In tip-enhanced SHG imaging of monolayer MoS2 films and single-crystalline ferroelectric YMnO3, we reveal nano-crystallographic details of domain boundaries and domain topology with enhanced sensitivity and nanoscale spatial resolution. The approach is applicable to any anisotropic linear and nonlinear optical response, and provides for optical nano-crystallographic imaging of molecular or quantum materials.

  8. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Unlike traditional analysis methods that work in a one-dimensional space, this study employs image-processing techniques to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map using bi-spectrum technology, which provides the basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. t-Distributed Stochastic Neighbor Embedding (t-SNE) is then adopted to reduce the dimensionality of the feature vector, highlighting the main fault features and reducing the subsequent computing burden. Finally, a probabilistic neural network is introduced for fault identification. Two typical rotating machines, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.
PMID:27711246

  9. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Unlike traditional analysis methods that work in a one-dimensional space, this study employs image-processing techniques to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map using bi-spectrum technology, which provides the basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. t-Distributed Stochastic Neighbor Embedding (t-SNE) is then adopted to reduce the dimensionality of the feature vector, highlighting the main fault features and reducing the subsequent computing burden. Finally, a probabilistic neural network is introduced for fault identification. Two typical rotating machines, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  10. Cosmology in generalized Proca theories

    NASA Astrophysics Data System (ADS)

    De Felice, Antonio; Heisenberg, Lavinia; Kase, Ryotaro; Mukohyama, Shinji; Tsujikawa, Shinji; Zhang, Ying-li

    2016-06-01

    We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history.

  11. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates are within it and allowing the search to be carried out directly by the least squares ambiguity decorrelation algorithm (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix the ambiguity reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not great.

  12. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446

  13. Versatile generation of optical vector fields and vector beams using a non-interferometric approach.

    PubMed

    Tripathi, Santosh; Toussaint, Kimani C

    2012-05-07

    We present a versatile, non-interferometric method for generating vector fields and vector beams which can produce all the states of polarization represented on a higher-order Poincaré sphere. The versatility and non-interferometric nature of this method is expected to enable exploration of various exotic properties of vector fields and vector beams. To illustrate this, we study the propagation properties of some vector fields and find that, in general, propagation alters both their intensity and polarization distribution, and more interestingly, converts some vector fields into vector beams. In the article, we also suggest a modified Jones vector formalism to represent vector fields and vector beams.
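
    As a small illustration of the kind of Jones-vector bookkeeping such beams require, a radially polarized vector beam can be written as an equal superposition of circular polarizations carrying opposite azimuthal phase (a standard higher-order Poincaré sphere construction; the notation here is ours, not necessarily the authors' modified formalism):

```python
import numpy as np

# Spatially varying Jones vector of a radially polarized vector beam, built
# as an equal superposition of circular polarizations with opposite
# azimuthal phase exp(-/+ i*phi).
phi = np.linspace(0, 2 * np.pi, 8, endpoint=False)   # azimuthal angle samples

sigma_p = np.array([1, 1j]) / np.sqrt(2)    # left circular polarization
sigma_m = np.array([1, -1j]) / np.sqrt(2)   # right circular polarization

# E(phi) = (e^{-i phi} sigma+ + e^{+i phi} sigma-) / sqrt(2)
E = (np.exp(-1j * phi)[:, None] * sigma_p +
     np.exp(+1j * phi)[:, None] * sigma_m) / np.sqrt(2)

# the superposition reduces to the purely radial field (cos phi, sin phi)
radial = np.stack([np.cos(phi), np.sin(phi)], axis=1)
assert np.allclose(E, radial)
```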

  14. A concept-based interactive biomedical image retrieval approach using visualness and spatial information

    NASA Astrophysics Data System (ADS)

    Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.

    2015-03-01

    This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted, entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (visualness) of a concept is measured as the Shannon entropy of pixel values in its image patches and is used to refine the feature vector. The system can also assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. A spatial verification step is applied in post-processing to improve retrieval results based on location information. The hypothesis that such approaches improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles in four different collections.
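
    The entropy-based visualness measure lends itself to a compact sketch: a patch of uniform intensity carries zero entropy (low visualness), a textured patch scores high, and the scores can weight concept features. The 8-bin quantization, patch sizes, and names below are our illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# "Visualness" of a local concept, measured as the Shannon entropy of the
# pixel values in an image patch (intensities assumed normalized to [0, 1]).
def patch_entropy(patch, bins=8):
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.full((16, 16), 0.5)                          # uniform: no structure
textured = np.random.default_rng(0).random((16, 16))   # varied intensities

e_flat = patch_entropy(flat)      # 0.0: a flat patch has no visualness
e_tex = patch_entropy(textured)   # near log2(8) = 3 for uniform noise

# entropy weights refine a (hypothetical) concept-frequency feature vector
weights = np.array([3.0, 1.0]) * np.array([e_flat, e_tex])
```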

  15. Dictionary Learning on the Manifold of Square Root Densities and Application to Reconstruction of Diffusion Propagator Fields*

    PubMed Central

    Sun, Jiaqi; Xie, Yuchen; Ye, Wenxing; Ho, Jeffrey; Entezari, Alireza; Blackband, Stephen J.

    2013-01-01

    In this paper, we present a novel dictionary learning framework for data lying on the manifold of square root densities and apply it to the reconstruction of diffusion propagator (DP) fields given a multi-shell diffusion MRI data set. Unlike most of the existing dictionary learning algorithms which rely on the assumption that the data points are vectors in some Euclidean space, our dictionary learning algorithm is designed to incorporate the intrinsic geometric structure of manifolds and performs better than traditional dictionary learning approaches when applied to data lying on the manifold of square root densities. Non-negativity as well as smoothness across the whole field of the reconstructed DPs is guaranteed in our approach. We demonstrate the advantage of our approach by comparing it with an existing dictionary based reconstruction method on synthetic and real multi-shell MRI data. PMID:24684004
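
    The geometric machinery involved can be illustrated on discretized square-root densities, which are simply unit vectors: geodesic operations on the unit sphere use exponential and logarithm maps. This is a standard construction, not the authors' code; function names are ours:

```python
import numpy as np

# Discretized square-root densities satisfy ||sqrt(p)|| = 1, i.e. they live
# on the unit sphere, so intrinsic operations use the sphere's exp/log maps.
def log_map(x, y):
    """Tangent vector at x pointing toward y along the geodesic."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    return theta * (y - c * x) / np.linalg.norm(y - c * x)

def exp_map(x, v):
    """Follow the geodesic from x in tangent direction v for length ||v||."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return x.copy()
    return np.cos(t) * x + np.sin(t) * v / t

p = np.array([0.5, 0.3, 0.2])          # two discrete probability densities
q = np.array([0.2, 0.2, 0.6])
x, y = np.sqrt(p), np.sqrt(q)          # corresponding points on the sphere

mid = exp_map(x, 0.5 * log_map(x, y))  # geodesic midpoint on the sphere
density_mid = mid ** 2                 # back to a density: sums to 1
```

    Staying on the sphere is what guarantees the reconstructed densities remain normalized, one of the properties the paper's framework preserves.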

  16. From direct-space discrepancy functions to crystallographic least squares.

    PubMed

    Giacovazzo, Carmelo

    2015-01-01

    Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach also leads to modifications of the standard least-squares procedures that are potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of the unlocated model is discussed and its scattering contribution is combined with that arising from the located model. The possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is also studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach described here includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.

  17. Super Normal Vector for Human Activity Recognition with Depth Cameras.

    PubMed

    Yang, Xiaodong; Tian, YingLi

    2017-05-01

    The advent of cost-effective and easy-to-operate depth cameras has facilitated a variety of visual recognition tasks, including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to the polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In extensive experiments, the proposed approach achieves performance superior to the state-of-the-art methods on four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.

  18. Active relearning for robust supervised classification of pulmonary emphysema

    NASA Astrophysics Data System (ADS)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However, the inconsistent differential diagnosis, due to subjective aggregation, mandates supervised classification. Towards optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach to minimize uncertainty in the selected training samples. Using multi-view inductive learning with the training samples, an ensemble of Support Vector Machine (SVM) models, each based on a specific pair-wise dissimilarity metric, was constructed in less than six seconds. In the active relearning phase, ensemble-expert label conflicts were resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the optimized parameter space of six different classifiers across seven dissimilarity metrics, where the average accuracy improvement rose to 21%. The cooperative feedback method proposed here could enhance both diagnostic and staging throughput efficiency in chest radiology practice.

  19. Dynamical Causal Modeling from a Quantum Dynamical Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demiralp, Emre; Demiralp, Metin

    Recent research suggests that any set of first order linear vector ODEs can be converted to a set of specific vector ODEs adhering to what we have called "Quantum Harmonical Form (QHF)". QHF has been developed using a virtual quantum multi harmonic oscillator system where mass and force constants are considered to be time variant and the Hamiltonian is defined as a conic structure over positions and momenta to conserve the Hermiticity. As described in previous works, the conversion to QHF requires the matrix coefficient of the first set of ODEs to be a normal matrix. In this paper, this limitation is circumvented using a space extension approach, expanding the potential applicability of this method. Overall, conversion to QHF allows the investigation of a set of ODEs using mathematical tools available to the investigation of the physical concepts underlying quantum harmonic oscillators. The utility of QHF in the context of dynamical systems and dynamical causal modeling in behavioral and cognitive neuroscience is briefly discussed.

  20. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
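
    The cost-sensitive SVM underlying the 2C-SVM formulation can be sketched with a weighted hinge loss: separate penalties on the two classes shift the decision boundary to favor one error type, which is how Neyman-Pearson-style operating points are reached. A minimal subgradient-descent sketch on synthetic data, not the authors' implementation:

```python
import numpy as np

# Cost-sensitive linear SVM in the spirit of the 2C-SVM: separate hinge
# penalties C_pos and C_neg trade missed positives against false alarms.
def cost_sensitive_svm(X, y, C_pos, C_neg, lr=0.01, epochs=500):
    n = len(X)
    w, b = np.zeros(X.shape[1]), 0.0
    cost = np.where(y > 0, C_pos, C_neg)
    for _ in range(epochs):
        margin = y * (X @ w + b)
        act = margin < 1                       # points violating the margin
        # subgradient of 0.5||w||^2 + (1/n) * sum_i cost_i * hinge_i
        grad_w = w - (cost[act] * y[act]) @ X[act] / n
        grad_b = -np.sum(cost[act] * y[act]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# a heavy penalty on missed positives pushes the boundary toward the negatives
w, b = cost_sensitive_svm(X, y, C_pos=10.0, C_neg=1.0)
pred = np.sign(X @ w + b)
```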

  1. Nozzle Side Load Testing and Analysis at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.; McDaniels, David M.; Brown, Andrew M.

    2009-01-01

    Realistic estimates of nozzle side loads, the off-axis forces that develop during engine start and shutdown, are important in the design cycle of a rocket engine. The estimated magnitude of the nozzle side loads has a large impact on the design of the nozzle shell and the engine's thrust vector control system. In 2004 Marshall Space Flight Center (MSFC) began developing a capability to quantify the relative magnitude of side loads caused by different types of nozzle contours. The MSFC Nozzle Test Facility was modified to measure nozzle side loads during simulated nozzle start. Side load results from cold flow tests on two nozzle test articles, one with a truncated ideal contour and one with a parabolic contour, are provided. The experimental approach, nozzle contour designs, and wall static pressures are also discussed.

  2. Vectorized Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Szymański, Grzegorz; Waszak, Michał

    2004-01-01

    This paper deals with vector hysteresis modeling. A vector model consisting of individual Jiles-Atherton components placed along the principal axes is proposed. Cross-axis coupling ensures general vector model properties. Minor loops are obtained using a scaling method. The model is intended for efficient finite element method computations defined in terms of the magnetic vector potential. Numerical efficiency is ensured by a differential susceptibility approach.

  3. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. 
Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64-bit systems, and a graphical user interface (GUI) has been developed to manage data processing and provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
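
    The first step of the pipeline, identifying locally coplanar surfaces via k-nearest neighbors and principal component analysis, can be sketched as follows (a brute-force neighbor search on toy data; k and the noise level are illustrative, not the tool's defaults):

```python
import numpy as np

# Per-point normal estimation by PCA over k nearest neighbors: the singular
# vector with the smallest singular value of the centered neighborhood
# approximates the local surface normal.
def knn_normals(points, k=8):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]        # k nearest (incl. the point)
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = Vt[-1]                   # least-variance direction
    return normals

# noisy samples of the plane z = 0: normals should align with (0, 0, +/-1)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       1e-3 * rng.standard_normal(200)])
normals = knn_normals(pts)
alignment = np.abs(normals[:, 2])             # |cos| angle to the true normal
```

    In the full tool, these per-point normals feed the kernel density estimation of set orientations and the later clustering stages.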

  4. Use of digital control theory state space formalism for feedback at SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himel, T.; Hendrickson, L.; Rouse, F.

    The algorithms used in the database-driven SLC fast-feedback system are based on the state space formalism of digital control theory. These are implemented as a set of matrix equations which use a Kalman filter to estimate a vector of states from a vector of measurements, and then apply a gain matrix to determine the actuator settings from the state vector. The matrices used in the calculation are derived offline using Linear Quadratic Gaussian minimization. For a given noise spectrum, this procedure minimizes the rms of the states (e.g., the position or energy of the beam). The offline program also allows simulation of the loop's response to arbitrary inputs, and calculates its frequency response. 3 refs., 3 figs.
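
    One cycle of such a loop can be sketched in a few lines: a Kalman filter refines the state estimate from noisy measurements, and a gain matrix maps the estimate to actuator settings. All matrices below are invented scalar stand-ins, not SLC values, and the gain would really come from the offline LQG design:

```python
import numpy as np

# Scalar toy version of the feedback cycle: estimate the beam offset with a
# Kalman filter, then null it through a feedback gain matrix.
A = np.array([[1.0]])      # state transition: offset persists
H = np.array([[1.0]])      # measurement matrix
Q = np.array([[1e-4]])     # process noise covariance
R = np.array([[1e-2]])     # measurement noise covariance
G = np.array([[-1.0]])     # feedback gain matrix (stand-in for LQG result)

x_hat = np.array([0.0])    # state estimate
P = np.array([[1.0]])      # estimate covariance
x_true = 0.5               # actual beam offset to be nulled
u = np.array([0.0])        # actuator setting

rng = np.random.default_rng(0)
for _ in range(50):
    x_true = x_true + u[0]                  # plant responds to the actuator
    x_hat = A @ x_hat + u                   # predict (model knows the input)
    P = A @ P @ A.T + Q
    z = x_true + 0.1 * rng.standard_normal()        # noisy monitor reading
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x_hat = x_hat + K @ (np.array([z]) - H @ x_hat) # measurement update
    P = (np.eye(1) - K @ H) @ P
    u = G @ x_hat                           # gain matrix -> next correction
```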

  5. A novel double fine guide sensor design on space telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Xu-xu; Yin, Da-yi

    2018-02-01

    To obtain high-precision attitude for a space telescope, a double marginal field-of-view (FOV) fine guide sensor (FGS) is proposed. It is composed of two large-area APS CMOS sensors that share the same lens along the main line of sight. More star vectors can be obtained from the two FGSs and used for high-precision attitude determination. To improve star identification speed, a vector cross-product formulation of inter-star angles, suited to the small marginal FOV and differing from the traditional approach, is elaborated, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then fused into an attitude solution with the traditional QUEST algorithm. Simulation results show that the system can obtain high-accuracy three-axis attitude and that the scheme is feasible.

  6. Finite Element Simulation of a Space Shuttle Solid Rocket Booster Aft Skirt Splashdown Using an Arbitrary Lagrangian-Eulerian Approach

    NASA Astrophysics Data System (ADS)

    Melis, Matthew E.

    2003-01-01

    Explicit finite element techniques employing an Arbitrary Lagrangian-Eulerian (ALE) methodology, within the transient dynamic code LS-DYNA, are used to predict splashdown loads on a proposed replacement/upgrade of the hydrazine tanks on the thrust vector control system housed within the aft skirt of a Space Shuttle Solid Rocket Booster. Two preliminary studies are performed prior to the full aft skirt analysis: An analysis of the proposed tank impacting water without supporting aft skirt structure, and an analysis of space capsule water drop tests conducted at NASA's Langley Research Center. Results from the preliminary studies provide confidence that useful predictions can be made by applying the ALE methodology to a detailed analysis of a 26-degree section of the skirt with proposed tank attached. Results for all three studies are presented and compared to limited experimental data. The challenges of using the LS-DYNA ALE capability for this type of analysis are discussed.

  7. Finite Element Simulation of a Space Shuttle Solid Rocket Booster Aft Skirt Splashdown Using an Arbitrary Lagrangian-eulerian Approach

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.

    2003-01-01

    Explicit finite element techniques employing an Arbitrary Lagrangian-Eulerian (ALE) methodology, within the transient dynamic code LS-DYNA, are used to predict splashdown loads on a proposed replacement/upgrade of the hydrazine tanks on the thrust vector control system housed within the aft skirt of a Space Shuttle Solid Rocket Booster. Two preliminary studies are performed prior to the full aft skirt analysis: An analysis of the proposed tank impacting water without supporting aft skirt structure, and an analysis of space capsule water drop tests conducted at NASA's Langley Research Center. Results from the preliminary studies provide confidence that useful predictions can be made by applying the ALE methodology to a detailed analysis of a 26-degree section of the skirt with proposed tank attached. Results for all three studies are presented and compared to limited experimental data. The challenges of using the LS-DYNA ALE capability for this type of analysis are discussed.

  8. Towards causal patch physics in dS/CFT

    NASA Astrophysics Data System (ADS)

    Neiman, Yasha

    2018-01-01

    This contribution is a status report on a research program aimed at obtaining quantum-gravitational physics inside a cosmological horizon through dS/CFT, i.e. through a holographic description at past/future infinity of de Sitter space. The program aims to bring together two main elements. The first is the observation by Anninos, Hartman and Strominger that Vasiliev's higher-spin gravity provides a working model for dS/CFT in 3+1 dimensions. The second is the proposal by Parikh, Savonije and Verlinde that dS/CFT may prove more tractable if one works in so-called "elliptic" de Sitter space - a folded-in-half version of global de Sitter where antipodal points have been identified. We review some relevant progress concerning quantum field theory on elliptic de Sitter space, higher-spin gravity and its holographic duality with a free vector model. We present our reasons for optimism that the approach outlined here will lead to a full holographic description of quantum (higher-spin) gravity in the causal patch of a de Sitter observer.

  9. Predication-based semantic indexing: permutations as a means to encode predications in semantic space.

    PubMed

    Cohen, Trevor; Schvaneveldt, Roger W; Rindflesch, Thomas C

    2009-11-14

    Corpus-derived distributional models of semantic distance between terms have proved useful in a number of applications. For both theoretical and practical reasons, it is desirable to extend these models to encode discrete concepts and the ways in which they are related to one another. In this paper, we present a novel vector space model that encodes semantic predications derived from MEDLINE by the SemRep system into a compact spatial representation. The associations captured by this method are of a different and complementary nature to those derived by traditional vector space models, and the encoding of predication types presents new possibilities for knowledge discovery and information retrieval.
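
    The permutation idea can be made concrete in a short sketch: each predication type gets a fixed coordinate permutation, and a predication is stored by superposing the permuted object vector onto the subject's semantic vector, which keeps "X TREATS Y" distinguishable from "X CAUSES Y". The dimension and the example terms are invented; this is the general random-indexing-with-permutations pattern, not the paper's exact system:

```python
import numpy as np

# Permutation-based predication encoding: a predication type is a coordinate
# permutation; "subject PRED object" is stored by adding the permuted object
# vector into the subject's semantic (memory) vector.
dim = 512
rng = np.random.default_rng(0)
elem = {t: rng.choice([-1.0, 1.0], dim)
        for t in ("aspirin", "headache", "fever")}
perm = {"TREATS": rng.permutation(dim), "CAUSES": rng.permutation(dim)}
sem = {t: np.zeros(dim) for t in elem}

def encode(subj, pred, obj):
    sem[subj] += elem[obj][perm[pred]]      # permute, then superpose

def similarity(subj, pred, obj):
    probe = elem[obj][perm[pred]]           # permuted probe vector
    return float(sem[subj] @ probe /
                 (np.linalg.norm(sem[subj]) * np.linalg.norm(probe)))

encode("aspirin", "TREATS", "headache")
encode("aspirin", "TREATS", "fever")

s_hit = similarity("aspirin", "TREATS", "headache")   # stored predication
s_miss = similarity("aspirin", "CAUSES", "headache")  # wrong predicate
```

    Because the two permutations scatter coordinates differently, the stored TREATS predication scores high while the CAUSES probe stays near zero, which is what lets the spatial representation encode typed relations rather than mere co-occurrence.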

  10. The Antibiotic-free pFAR4 Vector Paired with the Sleeping Beauty Transposon System Mediates Efficient Transgene Delivery in Human Cells.

    PubMed

    Pastor, Marie; Johnen, Sandra; Harmening, Nina; Quiviger, Mickäel; Pailloux, Julie; Kropp, Martina; Walter, Peter; Ivics, Zoltán; Izsvák, Zsuzsanna; Thumann, Gabriele; Scherman, Daniel; Marie, Corinne

    2018-06-01

    The anti-angiogenic and neurogenic pigment epithelium-derived factor (PEDF) demonstrated a potency to control choroidal neovascularization in age-related macular degeneration (AMD) patients. The goal of the present study was the development of an efficient and safe technique to integrate, ex vivo, the PEDF gene into retinal pigment epithelial (RPE) cells for later transplantation to the subretinal space of AMD patients to allow continuous PEDF secretion in the vicinity of the affected macula. Because successful gene therapy approaches require efficient gene delivery and stable gene expression, we used the antibiotic-free pFAR4 mini-plasmid vector to deliver the hyperactive Sleeping Beauty transposon system, which mediates transgene integration into the genome of host cells. In an initial study, lipofection-mediated co-transfection of HeLa cells with the SB100X transposase gene and a reporter marker delivered by pFAR4 showed a 2-fold higher level of genetically modified cells than when using the pT2 vectors. Similarly, with the pFAR4 constructs, electroporation-mediated transfection of primary human RPE cells led to 2.4-fold higher secretion of recombinant PEDF protein, which was still maintained 8 months after transfection. Thus, our results show that the pFAR4 plasmid is a superior vector for the delivery and integration of transgenes into eukaryotic cells.

  11. Colonization of a newly constructed urban wetland by mosquitoes in England: implications for nuisance and vector species.

    PubMed

    Medlock, Jolyon M; Vaux, Alexander G C

    2014-12-01

    Urban wetlands are being created in the UK as part of sustainable urban drainage strategies, to create wetland habitats lost during development, to provide a habitat for protected species, and to increase the public's access to 'blue-space' for the improvement of health and well-being. Sewage treatment reedbeds are also being incorporated into newly constructed wetlands to offer an alternative approach to dealing with sewage. This field study aims to provide the first UK evidence of how such newly constructed aquatic habitats are colonized by mosquitoes. A number of new aquatic habitats were surveyed for immature mosquitoes every fortnight over the first two years following wetland construction. The majority of mosquitoes collected were Culex sp. and were significantly associated with the sewage treatment reedbed system, particularly following storm events and sewage inflow. Other more natural aquatic habitats that were subject to cycles of drying and re-wetting contributed the majority of the remaining mosquitoes colonizing. Colonization of permanent habitats was slow, particularly where fluctuations in water levels inhibited emergent vegetation growth. It is recommended that during the planning process for newly constructed wetlands consideration is given on a case-by-case basis to the impact of mosquitoes, either as a cause of nuisance or as potential vectors. Although ornithophagic Culex dominated in this wetland, their potential role as enzootic West Nile virus vectors should not be overlooked.

  12. Real-time optical laboratory solution of parabolic differential equations

    NASA Technical Reports Server (NTRS)

    Casasent, David; Jackson, James

    1988-01-01

    An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.
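
    The explicit algorithm reduces each time step to a matrix-vector product, which is exactly the operation the optical processor performs in parallel. A conventional digital sketch in one space dimension (our simplification for brevity; the paper treats two space variables, and the grid size and diffusion number below are illustrative):

```python
import numpy as np

# Explicit time stepping of the 1D diffusion equation as repeated
# matrix-vector products u <- M u, with Dirichlet (absorbing) walls.
n, alpha = 33, 0.2          # grid points; alpha = D*dt/dx^2 (<= 0.5: stable)
L = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1))   # discrete Laplacian
M = np.eye(n) + alpha * L           # explicit Euler update matrix

u = np.zeros(n)
u[n // 2] = 1.0                     # initial heat spike at the center
for _ in range(100):
    u = M @ u                       # one matrix-vector product per time step
```

    Keeping the diffusion number at or below 0.5 keeps the explicit scheme stable; the spike then spreads symmetrically and slowly loses mass through the absorbing walls.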

  13. New perspectives in tracing vector-borne interaction networks.

    PubMed

    Gómez-Díaz, Elena; Figuerola, Jordi

    2010-10-01

    Disentangling trophic interaction networks in vector-borne systems has important implications for epidemiological and evolutionary studies. Molecular methods based on bloodmeal typing in vectors have been increasingly used to identify hosts. Although most molecular approaches benefit from good specificity and sensitivity, their temporal resolution is limited by the often rapid digestion of blood, and mixed bloodmeals still remain a challenge for bloodmeal identification in multi-host vector systems. Stable isotope analyses represent a novel complementary tool that can overcome some of these problems. The utility of these methods is discussed using examples from different vector-borne systems, and the extent to which they are complementary and versatile is highlighted. There are excellent opportunities for progress in the study of vector-borne transmission networks resulting from the integration of molecular and stable isotope approaches.

  14. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  15. A variational approach to dynamics of flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Wu, Shih-Chin; Haug, Edward J.; Kim, Sung-Soo

    1989-01-01

    This paper presents a variational formulation of constrained dynamics of flexible multibody systems, using a vector-variational calculus approach. Body reference frames are used to define global position and orientation of individual bodies in the system, located and oriented by position of its origin and Euler parameters, respectively. Small strain linear elastic deformation of individual components, relative to their body reference frames, is defined by linear combinations of deformation modes that are induced by constraint reaction forces and normal modes of vibration. A library of kinematic couplings between flexible and/or rigid bodies is defined and analyzed. Variational equations of motion for multibody systems are obtained and reduced to mixed differential-algebraic equations of motion. A space structure that must deform during deployment is analyzed to illustrate use of the methods developed.

  16. Adeno-associated virus type 8 vector–mediated expression of siRNA targeting vascular endothelial growth factor efficiently inhibits neovascularization in a murine choroidal neovascularization model

    PubMed Central

    Igarashi, Tsutomu; Miyake, Noriko; Fujimoto, Chiaki; Yaguchi, Chiemi; Iijima, Osamu; Shimada, Takashi; Takahashi, Hiroshi

    2014-01-01

    Purpose To assess the feasibility of a gene therapeutic approach to treating choroidal neovascularization (CNV), we generated an adeno-associated virus type 8 vector (AAV2/8) encoding an siRNA targeting vascular endothelial growth factor (VEGF), and determined the AAV2/8 vector’s ability to inhibit angiogenesis. Methods We initially transfected 3T3 cells expressing VEGF with the AAV2/8 plasmid vector psiRNA-VEGF using the H1 promoter and found that VEGF expression was significantly diminished in the transfectants. We next injected 1 μl (3 × 10¹⁴ vg/ml) of AAV2/8 vector encoding siRNA targeting VEGF (AAV2/8/SmVEGF-2; n = 12) or control vector encoding green fluorescent protein (GFP) (AAV2/8/GFP; n = 14) into the subretinal space in C57BL/6 mice. One week later, CNV was induced by using a diode laser to make four separate choroidal burns around the optic nerve in each eye. After an additional 2 weeks, the eyes were removed for flat mount analysis of the CNV surface area. Results Subretinal delivery of AAV2/8/SmVEGF-2 significantly diminished CNV at the laser lesions, compared to AAV8/GFP (1597.3±2077.2 versus 5039.5±4055.9 µm²; p<0.05). Using an enzyme-linked immunosorbent assay, we found that VEGF levels were reduced by approximately half in the AAV2/8/SmVEGF-2 treated eyes. Conclusions These results suggest that siRNA-VEGF can be expressed across the retina and that long-term suppression of CNV is possible through the use of stable AAV2/8-mediated siRNA-VEGF expression. In vivo gene therapy may thus be a feasible approach to the clinical management of CNV in conditions such as age-related macular degeneration. PMID:24744609

  17. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints.

    PubMed

    Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan

    2017-04-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.
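
    The distinction between the flux cone and the flux polyhedron can be made concrete on a toy linear pathway: the homogeneous constraints admit any non-negative scaling of a mode, while an inhomogeneous rate bound truncates the cone. The network and the bound below are invented for illustration:

```python
import numpy as np

# Toy pathway (import) -> A -> B -> C -> (export) with reactions R1..R4.
# Flux cone: {v : S v = 0, v >= 0}. Flux polyhedron: add a rate bound on R1.
S = np.array([[1, -1,  0,  0],    # metabolite A: made by R1, used by R2
              [0,  1, -1,  0],    # metabolite B: made by R2, used by R3
              [0,  0,  1, -1]],   # metabolite C: made by R3, used by R4
             dtype=float)

def in_cone(v):
    """Steady state plus irreversibility: the homogeneous constraints."""
    return bool(np.allclose(S @ v, 0.0) and np.all(np.asarray(v) >= 0))

def in_polyhedron(v, v1_max=10.0):
    """Add an inhomogeneous bound on R1, as FBA-style models do."""
    return in_cone(v) and v[0] <= v1_max

efm = np.array([1.0, 1.0, 1.0, 1.0])   # the chain's single elementary mode
```

    Any non-negative multiple of `efm` lies in the cone, but only multiples up to the bound lie in the polyhedron; this truncation is exactly why the EFM concept must be generalized to elementary flux vectors once inhomogeneous constraints enter.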

  18. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints

    PubMed Central

    Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan

    2017-01-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903

  19. Risk assessments for exposure of deployed military personnel to insecticides and personal protective measures used for disease-vector management.

    PubMed

    Macedo, Paula A; Peterson, Robert K D; Davis, Ryan S

    2007-10-01

    Infectious diseases are problematic for deployed military forces throughout the world, and, historically, more military service days have been lost to insect-vectored diseases than to combat. Because of the limitations in efficacy and availability of both vaccines and therapeutic drugs, vector management often is the best tool that military personnel have against most vector-borne pathogens. However, the use of insecticides may raise concerns about their effects on the health of the military personnel exposed to them. Therefore, our objective was to use risk assessment methodologies to evaluate health risks to deployed U.S. military personnel from vector management tactics. Our conservative tier-1, quantitative risk assessment focused on acute, subchronic, and chronic exposures and cancer risks to military personnel after insecticide application and use of personal protective measures in different scenarios. Exposures were estimated for every scenario, chemical, and pathway. Acute, subchronic, and chronic risks were assessed using a margin of exposure (MOE) approach. Our MOE was the ratio of a no-observed-adverse-effect level (NOAEL) to an estimated exposure. MOEs were greater than the levels of concern (LOCs) for all surface residual and indoor space spraying exposures, except acute dermal exposure to lambda-cyhalothrin. MOEs were greater than the LOCs for all chemicals in the truck-mounted ultra-low-volume (ULV) exposure scenario. The aggregate cancer risk for permethrin exceeded 1 × 10⁻⁶, but more realistic exposure refinements would reduce the cancer risk below that value. Overall, results indicate that health risks from exposures to insecticides and personal protective measures used by military personnel are low.
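
The MOE screen described above is simple arithmetic. The NOAEL, exposure, and level-of-concern values below are placeholders for illustration, not the study's data:

```python
# Minimal sketch of a margin-of-exposure (MOE) screen.
# All numeric values here are placeholders, not from the study.
def margin_of_exposure(noael_mg_per_kg, exposure_mg_per_kg):
    """MOE = NOAEL / estimated exposure."""
    return noael_mg_per_kg / exposure_mg_per_kg

def exceeds_loc(moe, loc=100.0):
    """Risk passes the screen when MOE is greater than the level of concern
    (an LOC of 100 is a commonly used default, assumed here)."""
    return moe > loc

moe = margin_of_exposure(noael_mg_per_kg=25.0, exposure_mg_per_kg=0.05)  # MOE = 500
print(exceeds_loc(moe))  # True
```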

  20. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M of a Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It causes a natural Weyl deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.

  1. Vector boson fusion in the inert doublet model

    NASA Astrophysics Data System (ADS)

    Dutta, Bhaskar; Palacio, Guillermo; Restrepo, Diego; Ruiz-Álvarez, José D.

    2018-03-01

    In this paper we probe the inert Higgs doublet model at the LHC using a vector boson fusion (VBF) search strategy. We optimize the selection cuts, investigate the parameter space of the model, and show that the VBF search has a better reach than the monojet searches. We also investigate Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the 3σ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb⁻¹.

  2. Non-lightlike ruled surfaces with constant curvatures in Minkowski 3-space

    NASA Astrophysics Data System (ADS)

    Ali, Ahmad Tawfik

    We study the non-lightlike ruled surfaces in Minkowski 3-space with non-lightlike base curve c(s) = ∫(αt + βn + γb)ds, where t, n, b are the tangent, principal normal and binormal vectors of an arbitrary timelike curve Γ(s). Several results on flat, minimal, II-minimal and II-flat non-lightlike ruled surfaces are established. Finally, the following interesting theorem is proved: the only non-lightlike ruled surface with non-zero constant mean curvature (CMC) is the developable timelike ruled surface generated by the binormal vector.

  3. Dual-scale topology optoelectronic processor.

    PubMed

    Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H

    1991-12-15

    The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
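
The generalization the abstract mentions, replacing multiplication and summation with other operations, can be sketched abstractly. The function below and the min/plus example are our own illustration of the idea, not the D-STOP implementation:

```python
from functools import reduce

def general_matvec(A, x, mul, add):
    """Matrix-vector product with multiplication and summation replaced by
    arbitrary binary operations (e.g. (min, +) for shortest-path relaxation)."""
    return [reduce(add, (mul(a, b) for a, b in zip(row, x))) for row in A]

A = [[1, 2], [3, 4]]
x = [10, 20]
# Ordinary linear algebra:
print(general_matvec(A, x, mul=lambda a, b: a * b, add=lambda a, b: a + b))  # [50, 110]
# Tropical (min, +) semiring, a standard nonlinear substitution:
print(general_matvec(A, x, mul=lambda a, b: a + b, add=min))  # [11, 13]
```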

  4. Enhancing vector shoreline data using a data fusion approach

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark; DeMichele, David

    2017-05-01

    Vector shoreline (VSL) data are potentially useful in ATR systems that distinguish between objects on land and water. Unfortunately, available data such as the NOAA 1:250,000 World Vector Shoreline and NGA Prototype Global Shoreline data cannot be used by themselves to make a land/water determination because of the manner in which the data are compiled. We describe a data fusion approach for creating labeled VSL data that uses test points from Global 30 Arc-Second Elevation (GTOPO30) data to determine the direction of vector segments, i.e., whether they are in clockwise or counterclockwise order. We show that consistently labeled VSL data can be used to easily determine whether a point is on land or water using a vector cross-product test.
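
The vector cross-product test can be sketched as follows. The sign convention (land on the left of a segment) is an illustrative assumption, not necessarily the one used by NOAA or NGA:

```python
# 2D cross-product side-of-segment test: with shoreline vertices stored in a
# consistent order (here we assume land lies to the left of each segment),
# the sign of the cross product classifies a query point as land or water.
def side_of_segment(p, a, b):
    """> 0: p is left of directed segment a->b; < 0: right; 0: collinear."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

a, b = (0.0, 0.0), (1.0, 0.0)   # one shoreline segment, land on the left
print(side_of_segment((0.5, 0.5), a, b) > 0)   # True  -> land side
print(side_of_segment((0.5, -0.5), a, b) > 0)  # False -> water side
```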

  5. A Bird's-Eye View of Molecular Changes in Plant Gravitropism Using Omics Techniques.

    PubMed

    Schüler, Oliver; Hemmersbach, Ruth; Böhmer, Maik

    2015-01-01

    During evolution, plants have developed mechanisms to adapt to a variety of environmental stresses, including drought, high salinity, changes in carbon dioxide levels and pathogens. Central signaling hubs and pathways that are regulated in response to these stimuli have been identified. In contrast to these well-studied environmental stimuli, changes in transcript, protein and metabolite levels in response to a gravitational stimulus are less well understood. Amyloplasts, localized in statocytes of the root tip, in mesophyll cells of coleoptiles and in the elongation zone of the growing internodes, comprise the statoliths in higher plants. Deviations of the statocytes with respect to the earthly gravity vector lead to a displacement of statoliths relative to the cell due to their inertia and thus to gravity perception. Downstream signaling events, including the conversion from the biophysical signal of sedimentation of distinct heavy mass to a biochemical signal, however, remain elusive. More recently, technical advances, including clinostats, drop towers, parabolic flights, satellites, and the International Space Station, allowed researchers to study the effect of altered gravity conditions, real and simulated micro- as well as hypergravity, on plants. This provides a unique opportunity to study plant responses to a purely anthropogenic stress for which no evolutionary program exists. Furthermore, the requirement for plants as food and oxygen sources during prolonged manned space exploration led to an increased interest in the identification of genes involved in the adaptation of plants to microgravity. Transcriptomic, proteomic, phosphoproteomic, and metabolomic profiling strategies provide a sensitive high-throughput approach to identify biochemical alterations in response to changes in the gravitational vector, and thus the acting gravitational force, on the transcript, protein and metabolite level. This review aims at summarizing recent experimental approaches and discusses major observations.

  6. A Fosmid Cloning Strategy for Detecting the Widest Possible Spectrum of Microbes from the International Space Station Drinking Water System

    PubMed Central

    Choi, Sangdun; Chang, Mi Sook; Stuecker, Tara; Chung, Christine; Newcombe, David A.; Venkateswaran, Kasthuri

    2012-01-01

    In this study, fosmid cloning strategies were used to assess the microbial populations in water from the International Space Station (ISS) drinking water system (henceforth referred to as Prebiocide and Tank A water samples). The goals of this study were: to compare the sensitivity of the fosmid cloning strategy with that of traditional culture-based and 16S rRNA-based approaches and to detect the widest possible spectrum of microbial populations during the water purification process. Initially, microbes could not be cultivated, and conventional PCR failed to amplify 16S rDNA fragments from these low biomass samples. Therefore, randomly primed rolling-circle amplification was used to amplify any DNA that might be present in the samples, followed by size selection by using pulsed-field gel electrophoresis. The amplified high-molecular-weight DNA from both samples was cloned into fosmid vectors. Several hundred clones were randomly selected for sequencing, followed by Blastn/Blastx searches. Sequences encoding specific genes from Burkholderia, a species abundant in the soil and groundwater, were found in both samples. Bradyrhizobium and Mesorhizobium, which belong to rhizobia, a large community of nitrogen fixers often found in association with plant roots, were present in the Prebiocide samples. Ralstonia, which is prevalent in soils with a high heavy metal content, was detected in the Tank A samples. The detection of many unidentified sequences suggests the presence of potentially novel microbial fingerprints. The bacterial diversity detected in this pilot study using a fosmid vector approach was higher than that detected by conventional 16S rRNA gene sequencing. PMID:23346038

  7. A Bird’s-Eye View of Molecular Changes in Plant Gravitropism Using Omics Techniques

    PubMed Central

    Schüler, Oliver; Hemmersbach, Ruth; Böhmer, Maik

    2015-01-01

    During evolution, plants have developed mechanisms to adapt to a variety of environmental stresses, including drought, high salinity, changes in carbon dioxide levels and pathogens. Central signaling hubs and pathways that are regulated in response to these stimuli have been identified. In contrast to these well-studied environmental stimuli, changes in transcript, protein and metabolite levels in response to a gravitational stimulus are less well understood. Amyloplasts, localized in statocytes of the root tip, in mesophyll cells of coleoptiles and in the elongation zone of the growing internodes, comprise the statoliths in higher plants. Deviations of the statocytes with respect to the earthly gravity vector lead to a displacement of statoliths relative to the cell due to their inertia and thus to gravity perception. Downstream signaling events, including the conversion from the biophysical signal of sedimentation of distinct heavy mass to a biochemical signal, however, remain elusive. More recently, technical advances, including clinostats, drop towers, parabolic flights, satellites, and the International Space Station, allowed researchers to study the effect of altered gravity conditions, real and simulated micro- as well as hypergravity, on plants. This provides a unique opportunity to study plant responses to a purely anthropogenic stress for which no evolutionary program exists. Furthermore, the requirement for plants as food and oxygen sources during prolonged manned space exploration led to an increased interest in the identification of genes involved in the adaptation of plants to microgravity. Transcriptomic, proteomic, phosphoproteomic, and metabolomic profiling strategies provide a sensitive high-throughput approach to identify biochemical alterations in response to changes in the gravitational vector, and thus the acting gravitational force, on the transcript, protein and metabolite level. This review aims at summarizing recent experimental approaches and discusses major observations. PMID:26734055

  8. An Adynamical, Graphical Approach to Quantum Gravity and Unification

    NASA Astrophysics Data System (ADS)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    We use graphical field gradients in an adynamical, background-independent fashion to propose a new approach to quantum gravity (QG) and unification. Our proposed reconciliation of general relativity (GR) and quantum field theory (QFT) is based on a modification of their graphical instantiations, i.e. Regge calculus and lattice gauge theory (LGT), respectively, which we assume are fundamental to their continuum counterparts. Accordingly, the fundamental structure is a graphical amalgam of space, time, and sources (in the parlance of QFT) called a "space-time source element". These are fundamental elements of space, time, and sources, not source elements in space and time. The transition amplitude for a space-time source element is computed using a path integral with a discrete graphical action. The action for a space-time source element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space, and J is then restricted to the row space of K so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint (AGC) between sources, the space-time metric, and the energy-momentum content of the element, rather than a dynamical law for time-evolved entities. In this view, one manifestation of quantum gravity becomes evident when, for example, a single space-time source element spans adjoining simplices of the Regge calculus graph. Thus, energy conservation for the space-time source element includes contributions to the deficit angles between simplices. This idea is used to correct proper distance in the Einstein-de Sitter (EdS) cosmology model, yielding a fit of the Union2 Compilation supernova data that matches ΛCDM without having to invoke accelerating expansion or dark energy. A similar modification to LGT results in an adynamical account of quantum interference.
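
The null-space / row-space structure of K and J described above can be illustrated on a tiny graph. The 3-node path graph and its Laplacian-style difference matrix below are our own toy example, not the paper's construction:

```python
import numpy as np

# Toy difference matrix K on a 3-node path graph (our own illustrative example).
# Constant vectors span its null space, so a source vector J restricted to the
# row space of K must be orthogonal to them: its entries sum to zero,
# mirroring the "divergence-free" (conserved) source condition.
K = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

null_vec = np.ones(3)            # K @ 1 = 0: non-trivial null space
J = np.array([1.0, 0.0, -1.0])   # entries sum to zero -> lies in the row space of K

print(np.allclose(K @ null_vec, 0))  # True
print(np.isclose(J.sum(), 0.0))      # True: conserved source
```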

  9. Optimal sampling strategies for detecting zoonotic disease epidemics.

    PubMed

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  10. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
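
The first step of the pipeline, turning a signal into a recurrence plot, can be sketched minimally. This uses scalar states and a fixed threshold; the paper's embedding and parameter choices may differ:

```python
import numpy as np

# Minimal recurrence-plot (RP) construction for a 1-D signal:
# R[i, j] = 1 when states i and j are closer than a threshold eps.
# (Delay embedding, common in RP practice, is omitted for brevity.)
def recurrence_plot(x, eps):
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances of scalar states
    return (dist <= eps).astype(np.uint8)

x = [0.0, 1.0, 0.1, 1.1]   # toy vibration samples
R = recurrence_plot(x, eps=0.2)
print(R)
```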

  11. Nonperturbative comparison of clover and highly improved staggered quarks in lattice QCD and the properties of the Φ meson

    DOE PAGES

    Chakraborty, Bipasha; Davies, C. T. H.; Donald, G. C.; ...

    2017-10-02

    Here, we compare correlators for pseudoscalar and vector mesons made from valence strange quarks using the clover quark and highly improved staggered quark (HISQ) formalisms in full lattice QCD. We use fully nonperturbative methods to normalise vector and axial vector current operators made from HISQ quarks, clover quarks and from combining HISQ and clover fields. This allows us to test expectations for the renormalisation factors based on perturbative QCD, with implications for the error budget of lattice QCD calculations of the matrix elements of clover-staggeredmore » $b$-light weak currents, as well as further HISQ calculations of the hadronic vacuum polarisation. We also compare the approach to the (same) continuum limit in clover and HISQ formalisms for the mass and decay constant of the $$\\phi$$ meson. Our final results for these parameters, using single-meson correlators and neglecting quark-line disconnected diagrams are: $$m_{\\phi} =$$ 1.023(5) GeV and $$f_{\\phi} = $$ 0.238(3) GeV in good agreement with experiment. These results come from calculations in the HISQ formalism using gluon fields that include the effect of $u$, $d$, $s$ and $c$ quarks in the sea with three lattice spacing values and $$m_{u/d}$$ values going down to the physical point.« less

  12. Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions.

    PubMed

    Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X

    2010-05-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, which is central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains by optimizing only two degrees of freedom (the hyperplane's intercept and its squared 2-norm) with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
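
One elimination step of a margin-based criterion can be sketched for a fixed linear classifier: drop the feature whose removal leaves the largest margin. The data and weights below are made up for illustration, and this is a simplification of the paper's MFE method:

```python
import numpy as np

# Sketch of one margin-based feature elimination step for a fixed linear
# classifier (w, b). Data and weights are illustrative, not from the paper.
def margin(X, y, w, b):
    """Geometric margin of the worst-case training point."""
    return np.min(y * (X @ w + b)) / np.linalg.norm(w)

def mfe_step(X, y, w, b):
    """Return (feature index to eliminate, resulting margin): the feature
    whose removal leaves the largest remaining margin."""
    best = None
    for j in range(X.shape[1]):
        w_j = w.copy(); w_j[j] = 0.0
        X_j = X.copy(); X_j[:, j] = 0.0
        m = margin(X_j, y, w_j, b)
        if best is None or m > best[1]:
            best = (j, m)
    return best

X = np.array([[1.0, 0.1], [-1.0, -0.2]])
y = np.array([1, -1])
w = np.array([1.0, 0.05])
j, m = mfe_step(X, y, w, 0.0)
print(j)  # index of the eliminated (least margin-relevant) feature
```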

  13. Effective Moment Feature Vectors for Protein Domain Structures

    PubMed Central

    Shi, Jian-Yu; Yiu, Siu-Ming; Zhang, Yan-Ning; Chin, Francis Yuk-Lun

    2013-01-01

    Image processing techniques have been shown to be useful in studying protein domain structures. The idea is to represent the pairwise distances of any two residues of the structure in a 2D distance matrix (DM). Features and/or submatrices are extracted from this DM to represent a domain. Existing approaches, however, may involve a large number of features (100–400) or complicated mathematical operations. Finding fewer but more effective features is always desirable. In this paper, based on some key observations on DMs, we are able to decompose a DM image into four basic binary images, each representing the structural characteristics of a fundamental secondary structure element (SSE) or a motif in the domain. Using the concept of moments in image processing, we further derive 45 structural features based on the four binary images. Together with 4 features extracted from the basic images, we represent the structure of a domain using 49 features. We show that our feature vectors can represent domain structures effectively in terms of the following. (1) We show a higher accuracy for domain classification. (2) We show a clear and consistent distribution of domains using our proposed structural vector space. (3) We are able to cluster the domains according to our moment features and demonstrate a relationship between structural variation and functional diversity. PMID:24391828
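
The two building blocks, a pairwise distance matrix and raw image moments, can be sketched together. The coordinates are toy stand-ins for residue positions, and the paper's 49 features are derived differently (via binary decomposition first):

```python
import numpy as np

# Pairwise residue distance matrix (DM) plus raw image moments
# m_pq = sum_{i,j} i^p * j^q * DM[i, j]. Coordinates are toy values.
def distance_matrix(coords):
    c = np.asarray(coords, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def raw_moment(img, p, q):
    i, j = np.indices(img.shape)
    return float((i ** p * j ** q * img).sum())

coords = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]   # three toy residue positions
DM = distance_matrix(coords)
print(raw_moment(DM, 0, 0))  # total "mass" of the distance image
```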

  14. Relative Navigation of Formation Flying Satellites

    NASA Technical Reports Server (NTRS)

    Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, Russell; Gramling, Cheryl; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Guidance, Navigation, and Control Center (GNCC) at Goddard Space Flight Center (GSFC) has successfully developed high-accuracy autonomous satellite navigation systems using the National Aeronautics and Space Administration's (NASA's) space and ground communications systems and the Global Positioning System (GPS). In addition, an autonomous navigation system that uses celestial object sensor measurements is currently under development and has been successfully tested using real Sun and Earth horizon measurements.The GNCC has developed advanced spacecraft systems that provide autonomous navigation and control of formation flyers in near-Earth, high-Earth, and libration point orbits. To support this effort, the GNCC is assessing the relative navigation accuracy achievable for proposed formations using GPS, intersatellite crosslink, ground-to-satellite Doppler, and celestial object sensor measurements. This paper evaluates the performance of these relative navigation approaches for three proposed missions with two or more vehicles maintaining relatively tight formations. High-fidelity simulations were performed to quantify the absolute and relative navigation accuracy as a function of navigation algorithm and measurement type. Realistically-simulated measurements were processed using the extended Kalman filter implemented in the GPS Enhanced Inboard Navigation System (GEONS) flight software developed by GSFC GNCC. Solutions obtained by simultaneously estimating all satellites in the formation were compared with the results obtained using a simpler approach based on differencing independently estimated state vectors.

  15. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    PubMed

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Maxwell Equations and the Redundant Gauge Degree of Freedom

    ERIC Educational Resources Information Center

    Wong, Chun Wa

    2009-01-01

    On transformation to the Fourier space (k, ω), the partial differential Maxwell equations simplify to algebraic equations, and the Helmholtz theorem of vector calculus reduces to vector algebraic projections. Maxwell equations and their solutions can then be separated readily into longitudinal and transverse components relative to the…
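
The algebraic projection at a single Fourier mode is a one-liner: the longitudinal part of a field amplitude v is its projection onto the unit vector along k, and the transverse part is the remainder. A minimal sketch with arbitrary example vectors:

```python
import numpy as np

# Helmholtz split of a vector amplitude v at Fourier mode k:
# v_long is parallel to k, v_trans satisfies k . v_trans = 0.
def helmholtz_split(k, v):
    k = np.asarray(k, dtype=float)
    v = np.asarray(v, dtype=float)
    khat = k / np.linalg.norm(k)
    v_long = np.dot(v, khat) * khat
    return v_long, v - v_long

k = np.array([0.0, 0.0, 2.0])
v = np.array([1.0, 2.0, 3.0])
v_L, v_T = helmholtz_split(k, v)
print(np.dot(k, v_T))  # transverse part is orthogonal to k: 0.0
```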

  17. Rate determination from vector observations

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.

    1993-01-01

    Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process, and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line-of-sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
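
The textbook TRIAD construction underlying both approaches can be sketched from two vector observations. The example vectors are arbitrary (a 90° rotation about z):

```python
import numpy as np

# Classical TRIAD attitude solution from two vector observations:
# body-frame measurements b1, b2 and reference-frame directions r1, r2.
def triad(b1, b2, r1, r2):
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    # Rotation (direction cosine matrix) taking reference frame to body frame.
    return frame(b1, b2) @ frame(r1, r2).T

r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
# Body-frame observations of the same directions, rotated 90 deg about z:
b1, b2 = np.array([0.0, 1.0, 0.0]), np.array([-1.0, 0.0, 0.0])
A = triad(b1, b2, r1, r2)
print(np.allclose(A @ r1, b1))  # True
```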

  18. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    PubMed

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form of system dynamics, thereby removing the need for mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.

  19. Interactive optimization approach for optimal impulsive rendezvous using primer vector and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin

    2010-08-01

    In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The optimization approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. In this optimization approach, adding a midcourse impulse is determined by an interactive method, i.e. observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated by three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design. It can guarantee solutions that satisfy Lawden's necessary optimality conditions.
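
    The paper's improved simulated annealing is not specified here, but the accept/reject core of any simulated-annealing optimizer follows the same pattern; the sketch below uses a toy one-dimensional "fuel" cost and hypothetical tuning constants, not the authors' trajectory model:

```python
import math, random

def anneal(cost, x0, t0=1.0, cooling=0.995, steps=4000, seed=1):
    """Minimal simulated-annealing loop: accept downhill moves always,
    uphill moves with probability exp(-delta/T), while T cools geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)         # random perturbation
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy convex "fuel" cost with its minimum at x = 3
best, fbest = anneal(lambda x: (x - 3.0) ** 2, x0=0.0)
```

    In the paper's setting the decision variables would be the impulse times and vectors for a fixed impulse count, with total delta-v as the cost.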

  20. Mach's principle: Exact frame-dragging via gravitomagnetism in perturbed Friedmann-Robertson-Walker universes with K=({+-}1,0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmid, Christoph

    We show that there is exact dragging of the axis directions of local inertial frames by a weighted average of the cosmological energy currents via gravitomagnetism for all linear perturbations of all Friedmann-Robertson-Walker (FRW) universes and of Einstein's static closed universe, and for all energy-momentum-stress tensors and in the presence of a cosmological constant. This includes FRW universes arbitrarily close to the Milne universe and the de Sitter universe. Hence the postulate formulated by Ernst Mach about the physical cause for the time-evolution of inertial axes is shown to hold in general relativity for linear perturbations of FRW universes. The time-evolution of local inertial axes (relative to given local fiducial axes) is given experimentally by the precession angular velocity ω_gyro of local gyroscopes, which in turn gives the operational definition of the gravitomagnetic field: B_g ≡ −2 ω_gyro. The gravitomagnetic field is caused by energy currents J_ε via the momentum constraint, Einstein's G^0̂_î equation, (−Δ + μ²) A_g = −16π G_N J_ε with B_g = curl A_g. This equation is analogous to Ampère's law, but it holds for all time-dependent situations. Δ is the de Rham-Hodge Laplacian, and Δ = −curl curl for the vorticity sector in Riemannian 3-space. In the solution for an open universe the 1/r² force of Ampère is replaced by a Yukawa force Y_μ(r) = (−d/dr)[(1/R) exp(−μr)], form-identical for FRW backgrounds with K = (−1, 0). Here r is the measured geodesic distance from the gyroscope to the cosmological source, and 2πR is the measured circumference of the sphere centered at the gyroscope and going through the source point. The scale of the exponential cutoff is the H-dot radius, where H is the Hubble rate, dot denotes the derivative with respect to cosmic time, and μ² = −4(dH/dt). Analogous results hold in closed FRW universes and in Einstein's closed static universe. We list six fundamental tests for the principle formulated by Mach; all of them are explicitly fulfilled by our solutions. We show that only energy currents in the toroidal vorticity sector with l = 1 can affect the precession of gyroscopes. We show that the harmonic decomposition of toroidal vorticity fields in terms of vector spherical harmonics X^(−)_lm has radial functions which are form-identical for the 3-sphere, the hyperbolic 3-space, and Euclidean 3-space, and are form-identical with the spherical Bessel, Neumann, and Hankel functions. The Appendix gives the de Rham-Hodge Laplacian on vorticity fields in Riemannian 3-spaces by equations connecting the calculus of differential forms with the curl notation. We also give the derivation of the Weitzenböck formula for the difference between the de Rham-Hodge Laplacian Δ and the "rough" Laplacian ∇² on vector fields.

  1. Do vegetated rooftops attract more mosquitoes? Monitoring disease vector abundance on urban green roofs.

    PubMed

    Wong, Gwendolyn K L; Jim, C Y

    2016-12-15

    Green roof, an increasingly common constituent of urban green infrastructure, can provide multiple ecosystem services and mitigate climate-change and urban-heat-island challenges. Its adoption has been beset by a longstanding preconception of attracting urban pests like mosquitoes. As more cities may become vulnerable to emerging and re-emerging mosquito-borne infectious diseases, the knowledge gap needs to be filled. This study gauges the habitat preference of vector mosquitoes for extensive green roofs vis-à-vis positive and negative control sites in an urban setting. Seven sites in a university campus were selected to represent three experimental treatments: green roofs (GR), ground-level blue-green spaces as positive controls (PC), and bare roofs as negative controls (NC). Mosquito-trapping devices were deployed for a year from March 2015 to 2016. Human-biting mosquito species known to transmit infectious diseases in the region were identified and recorded as target species. Generalized linear models evaluated the effects of site type, season, and weather on vector-mosquito abundance. Our model revealed site type as a significant predictor of vector mosquito abundance, with considerably more vector mosquitoes captured in PC than in GR and NC. Vector abundance was higher in NC than in GR, attributed to the occasional presence of water pools in depressions of roofing membrane after rainfall. Our data also demonstrated seasonal differences in abundance. Weather variables were evaluated to assess human-vector contact risks under different weather conditions. Culex quinquefasciatus, a competent vector of diseases including lymphatic filariasis and West Nile fever, could be the most adaptable species. Our analysis demonstrates that green roofs are not particularly preferred by local vector mosquitoes compared to bare roofs and other urban spaces in a humid subtropical setting. 
The findings call for a better understanding of vector ecology in diverse urban landscapes to improve disease control efficacy amidst surging urbanization and changing climate. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; count and position of symptomatic plants were used to test the hypothesis of epidemic Complete Spatial Randomness and isotropicity in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from dispersed (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemics modeling should include environmental setting (e.g., vineyard geometry and topography) to capture anisotropicity as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581

  3. Robust and Efficient Spin Purification for Determinantal Configuration Interaction.

    PubMed

    Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G

    2017-09-12

    The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag19) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.
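
    The effect of a spin-penalty term is easiest to see in a toy diagonal model: adding a penalty λ(⟨Ŝ²⟩ − target)² to each root's energy pushes spin-contaminated roots out of the low-energy window, so the lowest penalized root has the desired spin. The numbers below are hypothetical illustrations, not values from the paper:

```python
# Toy CI "Hamiltonian" and <S^2> values for three basis roots, taken
# diagonal so the eigenproblem is trivial (hypothetical numbers).
energies  = [-2.0, -1.8, -1.5]   # root 0 is lowest but spin-contaminated
s2_values = [2.0, 0.0, 0.0]      # <S^2>: triplet, singlet, singlet
s2_target = 0.0                  # we want the lowest singlet
lam = 10.0                       # penalty strength

# Penalty method: E' = E + lam * (<S^2> - target)^2
penalized = [e + lam * (s2 - s2_target) ** 2
             for e, s2 in zip(energies, s2_values)]

naive  = min(range(3), key=energies.__getitem__)    # picks the contaminated root
ground = min(range(3), key=penalized.__getitem__)   # picks the desired singlet
```

    In the real algorithm the shift acts on the Hamiltonian matrix inside Davidson diagonalization via the Ŝ²c matrix-vector product rather than on pre-tabulated eigenvalues.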

  4. Rotating electrical machines: Poynting flow

    NASA Astrophysics Data System (ADS)

    Donaghy-Spargo, C.

    2017-09-01

    This paper presents a complementary approach to the traditional Lorentz and Faraday approaches that are typically adopted in the classroom when teaching the fundamentals of electrical machines (motors and generators). The approach adopted is based upon the Poynting vector, which illustrates the 'flow' of electromagnetic energy. It is shown through simple vector analysis that the energy-flux density flow approach can provide insight into the operation of electrical machines, and it is also shown that the results agree with conventional Maxwell stress-based theory. The advantage of this approach is that it completes the physical picture of the electromechanical energy conversion process; it is also a means of maintaining student interest in the subject, as an unconventional application of the Poynting vector during the normal study of electromagnetism.

  5. Reanalyzing the "far medial" (transcondylar-transtubercular) approach based on three anatomical vectors: the ventral posterolateral corridor.

    PubMed

    Chakravarthi, Srikant; Monroy-Sosa, Alejandro; Gonen, Lior; Fukui, Melanie; Rovin, Richard; Kojis, Nathaniel; Lindsay, Mark; Khalili, Sammy; Celix, Juanita; Corsten, Martin; Kassam, Amin B

    2018-06-01

    Endoscopic endonasal access to the jugular foramen and occipital condyle - the transcondylar-transtubercular approach - is anatomically complex and requires detailed knowledge of the relative position of critical neurovascular structures, in order to avoid inadvertent injury and resultant complications. However, access to this region can be confusing as the orientation and relationships of osseous, vascular, and neural structures are very much different from traditional dorsal approaches. This review aims at providing an organizational construct for a more understandable framework in accessing the transcondylar-transtubercular window. The region can be conceptualized using a three-vector coordinate system: vector 1 represents a dorsal or ventral corridor, vector 2 represents the outer and inner circumferential anatomical limits; in an "onion-skin" fashion, key osseous, vascular, and neural landmarks are organized based on a 360-degree skull base model, and vector 3 represents the final core or target of the surgical corridor. The creation of an organized "global-positioning system" may better guide the surgeon in accessing the far-medial transcondylar-transtubercular region, and related pathologies, and help understand the surgical limits to the occipital condyle and jugular foramen - the ventral posterolateral corridor - via the endoscopic endonasal approach.

  6. Vector-averaged gravity does not alter acetylcholine receptor single channel properties

    NASA Technical Reports Server (NTRS)

    Reitstetter, R.; Gruener, R.

    1994-01-01

    To examine the physiological sensitivity of membrane receptors to altered gravity, we examined the single channel properties of the acetylcholine receptor (AChR), in co-cultures of Xenopus myocytes and neurons, under vector-averaged gravity in the clinostat. This experimental paradigm produces an environment in which, from the cell's perspective, the gravitational vector is "nulled" by continuous averaging. In that respect, the clinostat simulates one aspect of space microgravity, where the gravity force is greatly reduced. After clinorotation, the AChR channel mean open-time and conductance were statistically not different from control values but showed a rotation-dependent trend that suggests a process of cellular adaptation to clinorotation. These findings therefore suggest that AChR channel function may not be affected in the microgravity of space, despite changes in the receptor's cellular organization.

  7. Test spaces and characterizations of quadratic spaces

    NASA Astrophysics Data System (ADS)

    Dvurečenskij, Anatolij

    1996-10-01

    We show that a test space consisting of the nonzero vectors of a quadratic space E and of the set of all maximal orthogonal systems in E is algebraic iff E is Dacey or, equivalently, iff E is orthomodular. In addition, we present other orthomodularity criteria for quadratic spaces, and, using the result of Solèr, we show that they can imply that E is a real, complex, or quaternionic Hilbert space.

  8. A k-Space Method for Moderately Nonlinear Wave Propagation

    PubMed Central

    Jing, Yun; Wang, Tianren; Clement, Greg T.

    2013-01-01

    A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transferred into k-space via Fourier transformation, and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or the parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and the finite-difference time-domain (FDTD) method. It is found that to obtain accurate results in homogeneous media, the grid can be as coarse as two points per wavelength, and for a moderately nonlinear problem, the Courant–Friedrichs–Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that a relatively accurate focusing can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, an implementation of the k-space method on a single graphics processing unit required about one-seventh the computation time of a single-core CPU calculation. PMID:22899114
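
    The essence of a k-space (pseudospectral) scheme is that spatial derivatives become multiplications by the wave number after a Fourier transform. A minimal sketch, using a plain O(N²) DFT for clarity rather than an FFT, and standing in for only this one ingredient of the full wave-vector time-domain method:

```python
import cmath, math

def spectral_derivative(f, L):
    """Differentiate a periodic sample f (length N, period L) in k-space:
    forward DFT, multiply by i*k, inverse DFT."""
    N = len(f)
    # Forward DFT
    F = [sum(f[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    # Wave numbers in DFT ordering: 0..N/2-1, then the negative frequencies
    ks = [2 * math.pi / L * (k if k < N // 2 else k - N) for k in range(N)]
    G = [1j * kk * Fk for kk, Fk in zip(ks, F)]
    # Inverse DFT; keep the real part
    return [sum(G[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N, L = 32, 2 * math.pi
x = [L * n / N for n in range(N)]
df = spectral_derivative([math.sin(xi) for xi in x], L)   # should equal cos(x)
```

    For a band-limited field like sin(x) the k-space derivative is exact to machine precision, which is why such schemes tolerate very coarse grids.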

  9. Construction of siRNA/miRNA expression vectors based on a one-step PCR process

    PubMed Central

    Xu, Jun; Zeng, Jie Qiong; Wan, Gang; Hu, Gui Bin; Yan, Hong; Ma, Li Xin

    2009-01-01

    Background RNA interference (RNAi) has become a powerful means for silencing target gene expression in mammalian cells and is envisioned to be useful in therapeutic approaches to human disease. In recent years, high-throughput, genome-wide screening of siRNA/miRNA libraries has emerged as a desirable approach. Current methods for constructing siRNA/miRNA expression vectors require the synthesis of long oligonucleotides, which is costly and suffers from mutation problems. Results Here we report an ingenious method to solve traditional problems associated with construction of siRNA/miRNA expression vectors. We synthesized shorter primers (< 50 nucleotides) to generate a linear expression structure by PCR. The PCR products were directly transformed into chemically competent E. coli and converted to functional vectors in vivo via homologous recombination. The positive clones could be easily screened under UV light. Using this method we successfully constructed over 500 functional siRNA/miRNA expression vectors. Sequencing of the vectors confirmed a high accuracy rate. Conclusion This novel, convenient, low-cost and highly efficient approach may be useful for high-throughput assays of RNAi libraries. PMID:19490634

  10. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
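
    The FD variant above penalizes the ℓ1-norm of divergence computed by finite differences. A minimal central-difference divergence operator on a 2D grid, with a divergence-free and a divergent test field, might look like the following sketch (an illustration, not the authors' implementation):

```python
def divergence(vx, vy, h):
    """Central-difference divergence of a 2D vector field sampled on a grid
    with spacing h; vx, vy are indexed [i][j] ~ (x_i, y_j).  Returns the
    divergence at interior points only."""
    n = len(vx)
    return [[(vx[i+1][j] - vx[i-1][j]) / (2 * h)
             + (vy[i][j+1] - vy[i][j-1]) / (2 * h)
             for j in range(1, n - 1)] for i in range(1, n - 1)]

h = 0.1
grid = [i * h for i in range(11)]
# Divergence-free test field v = (y, x); divergent test field v = (x, y), div = 2
vx_free = [[y for y in grid] for x in grid]
vy_free = [[x for y in grid] for x in grid]
div_free = divergence(vx_free, vy_free, h)
div_full = divergence([[x for y in grid] for x in grid],
                      [[y for y in grid] for x in grid], h)
```

    In the reconstruction, the (near-)zero divergence of incompressible blood flow is what makes this operator a useful regularizer on the velocity maps.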

  11. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames" and represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
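
    The velocity-blending idea can be illustrated with a short sketch: a third-order-polynomial blend function interpolates between the incoming and outgoing segment velocities with zero slope at both ends (the velocity values below are hypothetical):

```python
def cubic_blend(tau):
    """Third-order polynomial blend: s(0)=0, s(1)=1, zero slope at both ends."""
    return 3 * tau**2 - 2 * tau**3

def blend_velocity(v_in, v_out, tau):
    """Blend componentwise between incoming and outgoing velocity vectors
    over a normalized blend window tau in [0, 1]."""
    s = cubic_blend(tau)
    return [(1 - s) * a + s * b for a, b in zip(v_in, v_out)]

# Velocities of the straight-line segments before and after a via frame
v1, v2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
mid = blend_velocity(v1, v2, 0.5)   # halfway through the blend window
```

    The same scalar blend function can drive linear, angular, and other parametric velocities, which is the common conceptual framework the abstract describes.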

  12. Sample levitation and melt in microgravity

    NASA Technical Reports Server (NTRS)

    Moynihan, Philip I. (Inventor)

    1990-01-01

    A system is described for maintaining a sample material in a molten state and away from the walls of a container in a microgravity environment, as in a space vehicle. A plurality of sources of electromagnetic radiation, such as an infrared wavelength, are spaced about the object, with the total net electromagnetic radiation applied to the object being sufficient to maintain it in a molten state, and with the vector sum of the applied radiation being in a direction to maintain the sample close to a predetermined location away from the walls of a container surrounding the sample. For a processing system in a space vehicle that orbits the Earth, the net radiation vector is opposite the velocity of the orbiting vehicle.

  13. Sample levitation and melt in microgravity

    NASA Technical Reports Server (NTRS)

    Moynihan, Philip I. (Inventor)

    1987-01-01

    A system is described for maintaining a sample material in a molten state and away from the walls of a container in a microgravity environment, as in a space vehicle. A plurality of sources of electromagnetic radiation, such as of an infrared wavelength, are spaced about the object, with the total net electromagnetic radiation applied to the object being sufficient to maintain it in a molten state, and with the vector sum of the applied radiation being in a direction to maintain the sample close to a predetermined location away from the walls of a container surrounding the sample. For a processing system in a space vehicle that orbits the Earth, the net radiation vector is opposite the velocity of the orbiting vehicle.

  14. Supersymmetric dS/CFT

    NASA Astrophysics Data System (ADS)

    Hertog, Thomas; Tartaglino-Mazzucchelli, Gabriele; Van Riet, Thomas; Venken, Gerben

    2018-02-01

    We put forward new explicit realisations of dS/CFT that relate N = 2 supersymmetric Euclidean vector models with reversed spin-statistics in three dimensions to specific supersymmetric Vasiliev theories in four-dimensional de Sitter space. The partition function of the free supersymmetric vector model deformed by a range of low spin deformations that preserve supersymmetry appears to specify a well-defined wave function with asymptotic de Sitter boundary conditions in the bulk. In particular we find the wave function is globally peaked at undeformed de Sitter space, with a low amplitude for strong deformations. This suggests that supersymmetric de Sitter space is stable in higher-spin gravity and in particular free from ghosts. We speculate this is a limiting case of the de Sitter realizations in exotic string theories.

  15. Intertwined Hamiltonians in two-dimensional curved spaces

    NASA Astrophysics Data System (ADS)

    Aghababaei Samani, Keivan; Zarei, Mina

    2005-04-01

    The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for Euclidean plane, Minkowski plane, Poincaré half plane (AdS2), de Sitter plane (dS2), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle.

  16. Regular and Chaotic Spatial Distribution of Bose-Einstein Condensed Atoms in a Ratchet Potential

    NASA Astrophysics Data System (ADS)

    Li, Fei; Xu, Lan; Li, Wenwu

    2018-02-01

    We study the regular and chaotic spatial distribution of Bose-Einstein condensed atoms with a space-dependent nonlinear interaction in a ratchet potential. There exists in the system a space-dependent atomic current that can be tuned via the Feshbach resonance technique. In the presence of the space-dependent atomic current and a weak ratchet potential, the Smale-horseshoe chaos is studied and the Melnikov chaotic criterion is obtained. Numerical simulations show that the ratio between the intensities of the optical potentials forming the ratchet potential, the wave vector of the laser producing the ratchet potential, or the wave vector of the modulating laser can be chosen as control parameters to induce or avoid chaotic spatial distributions.

  17. A static investigation of the thrust vectoring system of the F/A-18 high-alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Mason, Mary L.; Capone, Francis J.; Asbury, Scott C.

    1992-01-01

    A static (wind-off) test was conducted in the static test facility of the Langley 16-foot Transonic Tunnel to evaluate the vectoring capability and isolated nozzle performance of the proposed thrust vectoring system of the F/A-18 high alpha research vehicle (HARV). The thrust vectoring system consisted of three asymmetrically spaced vanes installed externally on a single test nozzle. Two nozzle configurations were tested: A maximum afterburner-power nozzle and a military-power nozzle. Vane size and vane actuation geometry were investigated, and an extensive matrix of vane deflection angles was tested. The nozzle pressure ratios ranged from two to six. The results indicate that the three vane system can successfully generate multiaxis (pitch and yaw) thrust vectoring. However, large resultant vector angles incurred large thrust losses. Resultant vector angles were always lower than the vane deflection angles. The maximum thrust vectoring angles achieved for the military-power nozzle were larger than the angles achieved for the maximum afterburner-power nozzle.

  18. Definition of Linear Color Models in the RGB Vector Color Space to Detect Red Peaches in Orchard Images Taken under Natural Illumination

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates. PMID:22969369
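
    The pixel classification step described above reduces to a point-to-line distance in RGB space. A sketch, assuming each linear color model is a line through the RGB origin (the model direction and pixel values below are hypothetical):

```python
import math

def distance_to_color_line(p, d):
    """Euclidean distance in RGB space from pixel p to the line through the
    origin with direction d (a linear color model): the distance from p to
    its orthogonal projection onto the line."""
    nd = math.sqrt(sum(c * c for c in d))
    u = [c / nd for c in d]                        # unit direction of the model
    t = sum(pc * uc for pc, uc in zip(p, u))       # scalar projection of p
    proj = [t * uc for uc in u]
    return math.sqrt(sum((pc - pr) ** 2 for pc, pr in zip(p, proj)))

# Hypothetical "red peach" model: line along (200, 40, 50) in RGB
model = (200.0, 40.0, 50.0)
on_line = distance_to_color_line((100.0, 20.0, 25.0), model)    # same hue, darker
off_line = distance_to_color_line((40.0, 200.0, 60.0), model)   # green pixel
```

    Lines through the origin make the distance independent of brightness, which is one way such models can cope with natural-illumination changes; the paper's exact model definition may differ.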

  19. Constraining biosphere CO2 flux at regional scale with WRF-CO2 4DVar assimilation system

    NASA Astrophysics Data System (ADS)

    Zheng, T.

    2017-12-01

    The WRF-CO2 4DVar assimilation system is updated to include (1) operators for tower-based observations, (2) chemistry initial and boundary conditions in the state vector, and (3) a mechanism for aggregation from the simulation model grid to the state vector space. The updated system is first tested with synthetic data to ensure its accuracy. The system is then used to test regional-scale CO2 inversion at the MCI (Midcontinental Intensive) sites, where CO2 mole fraction data were collected at multiple tall towers during 2007-2008. The model domain is centered on Iowa, includes 8 towers within its boundary, and has 12x12 km horizontal grid spacing. First, the relative impacts of the initial and boundary conditions are assessed by the system's adjoint model. This is done with 24, 48, and 72 hour time spans. Second, we assessed the impacts of the transport error, including the misrepresentation of the boundary layer and cumulus activities. Third, we evaluated different aggregation approaches from the native model grid to the control variables (including scaling factors for flux, initial, and boundary conditions). Fourth, we assessed the inversion performance using CO2 observations with different time intervals and from different tower levels. We also examined the appropriate treatment of the background and observation error covariances in relation to these varying observation data sets.

  20. Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification

    NASA Astrophysics Data System (ADS)

    Gao, G.; Zhang, M.; Gu, Y.

    2017-05-01

    Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. As spatial resolution improves, the "pepper and salt" effect appears, and classification results are degraded when pixelwise classification algorithms are applied to high-resolution satellite images, in which the spatial relationship among the pixels is ignored. For classifying multi-temporal high resolution images with limited labelled samples, spectral drift, and the "pepper and salt" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from each superpixel are formed into a vector. Thirdly, a majority-voting manifold alignment method aimed at the high-resolution problem is proposed, mapping the vector data into the alignment space. At last, all the data in the alignment space are classified using the KNN method. Multi-temporal images from different areas or the same area are both considered in this paper. In the experiments, 2 groups of multi-temporal HR images collected by the China GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
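
    The final classification step, majority-vote KNN on feature vectors in the aligned space, can be sketched as follows (the feature vectors and class names are hypothetical):

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Majority-vote k-nearest-neighbour classification of one feature vector."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], query))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical superpixel feature vectors in the aligned space, two classes
features = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
            (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
classes  = ["water", "water", "water", "urban", "urban", "urban"]
label = knn_classify(features, classes, (0.2, 0.2), k=3)
```

    Classifying whole superpixels rather than pixels is what suppresses the "pepper and salt" effect, since all pixels in a superpixel receive the same label.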

  1. Definition of linear color models in the RGB vector color space to detect red peaches in orchard images taken under natural illumination.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.

  2. Introducing the Filtered Park's and Filtered Extended Park's Vector Approach to detect broken rotor bars in induction motors independently from the rotor slots number

    NASA Astrophysics Data System (ADS)

    Gyftakis, Konstantinos N.; Marques Cardoso, Antonio J.; Antonino-Daviu, Jose A.

    2017-09-01

    The Park's Vector Approach (PVA), together with its variations, has been one of the most widespread diagnostic methods for electrical machines and drives. Regarding broken rotor bar fault diagnosis in induction motors, the common practice is to rely on the width increase of the Park's Vector (PV) ring and then apply more sophisticated signal processing methods. It is shown in this paper that this practice can be unreliable, since it depends strongly on the numbers of magnetic poles and rotor slots. To overcome this constraint, the novel Filtered Park's/Extended Park's Vector Approach (FPVA/FEPVA) is introduced. The investigation is carried out with FEM simulations and experimental testing. The simulated and experimental results are in close agreement, and the proposed FPVA method proves to be reliable.
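    For context, the Park's vector underlying PVA is computed from the three stator phase currents with the standard transform used in this diagnostic literature. A minimal NumPy sketch follows; the balanced 50 Hz toy signal is an assumption, not the paper's test rig.

```python
import numpy as np

def parks_vector(ia, ib, ic):
    """Park's vector components (id, iq) from the three stator phase
    currents, using the transform common in PVA diagnostics."""
    i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
    i_q = (ib - ic) / np.sqrt(2)
    return i_d, i_q

# balanced sinusoidal supply -> the (id, iq) locus is a circle;
# a broken rotor bar modulates (thickens) this ring at twice the slip frequency
t = np.linspace(0, 0.04, 1000)           # two 50 Hz periods
w = 2 * np.pi * 50
ia = np.cos(w * t)
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)
i_d, i_q = parks_vector(ia, ib, ic)
radius = np.hypot(i_d, i_q)
print(radius.min(), radius.max())        # constant radius for a healthy machine
```

The width of the ring traced by `(i_d, i_q)` is the quantity whose increase the conventional PVA inspects; the paper's FPVA/FEPVA filters the signals before forming this vector.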

  3. Anopheles Vectors in Mainland China While Approaching Malaria Elimination.

    PubMed

    Zhang, Shaosen; Guo, Shaohua; Feng, Xinyu; Afelt, Aneta; Frutos, Roger; Zhou, Shuisen; Manguin, Sylvie

    2017-11-01

    China is approaching malaria elimination; however, well-documented information on malaria vectors is still missing, which could hinder the development of appropriate surveillance strategies and WHO certification. This review summarizes the nationwide distribution of malaria vectors, their bionomic characteristics, control measures, and related studies. After several years of effort, the area of distribution of the principal malaria vectors was reduced, in particular for Anopheles lesteri (synonym: An. anthropophagus) and Anopheles dirus s.l., which nearly disappeared from their former endemic regions. Anopheles sinensis is becoming the predominant species in southwestern China. The bionomic characteristics of these species have changed, and resistance to insecticides was reported. There is a need to update surveillance tools and investigate the role of secondary vectors in malaria transmission.

  4. Higher-order fluctuation-dissipation relations in plasma physics: Binary Coulomb systems

    NASA Astrophysics Data System (ADS)

    Golden, Kenneth I.

    2018-05-01

    A recent approach that led to compact frequency domain formulations of the cubic and quartic fluctuation-dissipation theorems (FDTs) for the classical one-component plasma (OCP) [Golden and Heath, J. Stat. Phys. 162, 199 (2016), 10.1007/s10955-015-1395-6] is generalized to accommodate binary ionic mixtures. Paralleling the procedure followed for the OCP, the basic premise underlying the present approach is that a (k, ω) 4-vector rotational symmetry, known to be a pivotal feature in the frequency domain architectures of the linear and quadratic fluctuation-dissipation relations for a variety of Coulomb plasmas [Golden et al., J. Stat. Phys. 6, 87 (1972), 10.1007/BF01023681; J. Stat. Phys. 29, 281 (1982), 10.1007/BF01020787; Golden, Phys. Rev. E 59, 228 (1999), 10.1103/PhysRevE.59.228], is expected to be a pivotal feature of the frequency domain architectures of the higher-order members of the FDT hierarchy. On this premise, each member, in its most tractable form, connects a single (p+1)-point dynamical structure function to a linear combination of (p+1)-order density response functions; by definition, such a combination must also remain invariant under rotation of their (k1, ω1), (k2, ω2), ..., (kp, ωp), (k1+k2+⋯+kp, ω1+ω2+⋯+ωp) 4-vector arguments. Assigned to each 4-vector is a species index that corotates in lock step. Consistency is assured by matching the static limits of the resulting frequency domain cubic and quartic FDTs to their exact static counterparts, independently derived in the present work via a conventional time-independent perturbation expansion of the Liouville distribution function in its macrocanonical form. The proposed procedure entirely circumvents the daunting issues of entangled Liouville space paths and nested Poisson brackets that one would encounter if one attempted to use the conventional time-dependent perturbation-theoretic Kubo approach to establish the frequency domain FDTs beyond quadratic order.

  5. Construction of fusion vectors of corynebacteria: expression of glutathione-S-transferase fusion protein in Corynebacterium acetoacidophilum ATCC 21476.

    PubMed

    Srivastava, Preeti; Deb, J K

    2002-07-02

    A series of fusion vectors containing glutathione-S-transferase (GST) were constructed by inserting GST fusion cassette of Escherichia coli vectors pGEX4T-1, -2 and -3 in corynebacterial vector pBK2. Efficient expression of GST driven by inducible tac promoter of E. coli was observed in Corynebacterium acetoacidophilum. Fusion of enhanced green fluorescent protein (EGFP) and streptokinase genes in this vector resulted in the synthesis of both the fusion proteins. The ability of this recombinant organism to produce several-fold more of the product in the extracellular medium than in the intracellular space would make this system quite attractive as far as the downstream processing of the product is concerned.

  6. Reverse chemical ecology approach for the identification of a mosquito oviposition attractant

    USDA-ARS?s Scientific Manuscript database

    Pheromones and other semiochemicals play a crucial role in today’s integrated pest and vector management strategies for controlling populations of insects causing loses to agriculture and vectoring diseases to humans. These semiochemicals are typically discovered by bioassay-guided approaches. Here,...

  7. A vectorial semantics approach to personality assessment.

    PubMed

    Neuman, Yair; Cohen, Yochai

    2014-04-23

    Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.
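    The core measurement in this approach, similarity between a text vector and a personality-dimension vector in a shared semantic space, can be sketched with cosine similarity. The four-word vocabulary and both vectors below are hypothetical toys, not the authors' trait vectors.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors in a shared semantic space."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# hypothetical bag-of-words axes: ["worry", "party", "plan", "argue"]
neuroticism = np.array([1.0, 0.0, 0.1, 0.4])   # toy "trait/disorder" vector
essay = np.array([3.0, 0.0, 1.0, 1.0])         # toy essay term counts
score = cosine(essay, neuroticism)
print(round(score, 3))                          # high similarity to the toy trait vector
```

In the paper's setting, the trait vectors are constructed from semantic representations of personality dimensions and disorders, and scores like this one are compared against five-factor-model assessments of the essay writers.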

  10. Conceptualizing Vectors in College Geometry: A New Framework for Analysis of Student Approaches and Difficulties

    ERIC Educational Resources Information Center

    Kwon, Oh Hoon

    2012-01-01

    This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…

  11. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    PubMed

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.
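    The attribute named in the title, the magnitude of the normal position vector, can be sketched as a PCA plane fit over a point neighbourhood followed by projecting the centroid onto the plane normal. This is a minimal NumPy illustration under stated assumptions; the synthetic planar patch is hypothetical, not the paper's laser data.

```python
import numpy as np

def normal_position_magnitude(points):
    """Fit a plane to a point neighbourhood by PCA and return the magnitude
    of the normal position vector: the origin-to-plane distance |c . n|."""
    c = points.mean(axis=0)                 # neighbourhood centroid
    cov = np.cov((points - c).T)            # 3x3 covariance of the patch
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    n = eigvecs[:, 0]                       # smallest-eigenvalue eigenvector = plane normal
    return abs(np.dot(c, n))                # projection of the centroid onto the normal

# toy patch on the plane z = 5 -> normal (0, 0, 1), origin distance 5
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (50, 2))
patch = np.column_stack([xy, np.full(50, 5.0)])
print(normal_position_magnitude(patch))     # distance of the z = 5 patch from the origin
```

Because coplanar points share this scalar regardless of where they sit on the plane, it makes a compact one-dimensional attribute for the accumulator array described above.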

  13. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
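    The SADH features mentioned above follow Unser's sum and difference histograms, which replace the 2-D co-occurrence matrix with two 1-D histograms. A minimal sketch follows; the tiny 2x2 image and the choice of the contrast feature are illustrative assumptions.

```python
import numpy as np

def sum_diff_histograms(img, dx=1, dy=0, levels=256):
    """Sum and difference histograms for displacement (dx, dy):
    histograms of I(x, y) + I(x+dx, y+dy) and I(x, y) - I(x+dx, y+dy)."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]                               # pixels shifted by (dx, dy)
    s = (a.astype(int) + b).ravel()                 # sums in [0, 2*levels - 2]
    d = (a.astype(int) - b).ravel()                 # differences in [-(levels-1), levels-1]
    h_s = np.bincount(s, minlength=2 * levels - 1)
    h_d = np.bincount(d + levels - 1, minlength=2 * levels - 1)
    return h_s / s.size, h_d / d.size               # normalise to probabilities

img = np.array([[0, 1], [2, 3]], dtype=np.uint8)
P_s, P_d = sum_diff_histograms(img, dx=1, dy=0, levels=4)
# texture "contrast" = E[d^2], summed over difference values j = -3..3
contrast = sum(j * j * p for j, p in zip(range(-3, 4), P_d))
print(contrast)   # mean squared horizontal difference -> 1.0
```

Texture features such as mean, contrast, and entropy computed from `P_s` and `P_d` approximate their GLCM counterparts at a fraction of the run time and storage, which is the saving the abstract quantifies.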

  14. Equilibrium polymerization on the equivalent-neighbor lattice

    NASA Technical Reports Server (NTRS)

    Kaufman, Miron

    1989-01-01

    The equilibrium polymerization problem is solved exactly on the equivalent-neighbor lattice. The Flory-Huggins (Flory, 1986) entropy of mixing is exact for this lattice. The discrete version of the n-vector model, in the limit as n approaches 0, is verified to be equivalent to the equal-reactivity polymerization process in the whole parameter space, including the polymerized phase. The polymerization processes for polymers satisfying the Schulz (1939) distribution exhibit nonuniversal critical behavior. A close analogy is found between the polymerization problem of Schulz index r and the Bose-Einstein ideal gas in d = -2r dimensions, with critical polymerization corresponding to Bose-Einstein condensation.

  15. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: face image reconstruction in the HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  16. Hájek-Rényi inequality for m-asymptotically almost negatively associated random vectors in Hilbert space and applications.

    PubMed

    Ko, Mi-Hwa

    2018-01-01

    In this paper, we obtain the Hájek-Rényi inequality and, as an application, we study the strong law of large numbers for H -valued m -asymptotically almost negatively associated random vectors with mixing coefficients [Formula: see text] such that [Formula: see text].
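    For context, the classical scalar Hájek-Rényi inequality that results of this kind generalize reads, for independent, mean-zero, square-integrable random variables X_1, ..., X_N and a nonincreasing positive sequence c_1 ≥ c_2 ≥ ... > 0 (this classical form is background, not the paper's H-valued m-AANA statement):

```latex
P\!\left(\max_{m \le n \le N} c_n \lvert S_n \rvert \ge \varepsilon\right)
\;\le\; \frac{1}{\varepsilon^{2}}
\left( c_m^{2} \sum_{i=1}^{m} \mathbb{E}\,X_i^{2}
\;+\; \sum_{i=m+1}^{N} c_i^{2}\, \mathbb{E}\,X_i^{2} \right),
\qquad S_n = \sum_{i=1}^{n} X_i .
```

Inequalities of this shape yield strong laws of large numbers by choosing c_n = 1/n, which is the route the paper follows for its H-valued dependent setting.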

  17. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
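    The load-dependent Ritz recurrence referred to above (start from the static solution of K x = f, then repeatedly apply K^-1 M while M-orthonormalising) can be sketched in a few lines of NumPy. The 3-DOF spring-mass system is a toy assumption, not the paper's space-station model.

```python
import numpy as np

def load_dependent_ritz(K, M, f, n_vec):
    """Generate n_vec load-dependent Ritz vectors: the first is the static
    response to the load f, each subsequent one is K^-1 M times the previous,
    M-orthonormalised against all earlier vectors (Gram-Schmidt)."""
    X = np.zeros((K.shape[0], n_vec))
    x = np.linalg.solve(K, f)                    # static response to the spatial load
    x /= np.sqrt(x @ M @ x)                      # M-normalise
    X[:, 0] = x
    for i in range(1, n_vec):
        x = np.linalg.solve(K, M @ X[:, i - 1])  # inertia-weighted recurrence
        for j in range(i):                       # M-orthogonalise against earlier vectors
            x -= (X[:, j] @ M @ x) * X[:, j]
        X[:, i] = x / np.sqrt(x @ M @ x)
    return X

# 3-DOF spring-mass toy: tridiagonal stiffness, unit masses, tip load
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
M = np.eye(3)
f = np.array([0.0, 0.0, 1.0])
X = load_dependent_ritz(K, M, f, 2)
print(np.round(X.T @ M @ X, 10))   # identity: the vectors are M-orthonormal
```

Because the basis is seeded by the actual load pattern rather than by load-independent eigenvectors, a few such vectors often capture the forced response that many normal modes would be needed to represent.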

  18. Energy theorem for (2+1)-dimensional gravity.

    NASA Astrophysics Data System (ADS)

    Menotti, P.; Seminara, D.

    1995-05-01

    We prove a positive energy theorem in (2+1)-dimensional gravity for open universes and any matter energy-momentum tensor satisfying the dominant energy condition. We consider on the space-like initial value surface a family of widening Wilson loops and show that the energy-momentum of the enclosed subsystem is a future directed time-like vector whose mass is an increasing function of the loop, until it reaches the value 1/4G corresponding to a deficit angle of 2π. At this point the energy-momentum of the system evolves, depending on the nature of a zero norm vector appearing in the evolution equations, either into a time-like vector of a universe which closes kinematically or into a Gott-like universe whose energy-momentum vector, as first recognized by Deser, Jackiw, and 't Hooft (1984), is space-like. This treatment generalizes results obtained by Carroll, Farhi, Guth, and Olum (1994) for a system of point-like spinless particles to the most general form of matter whose energy-momentum tensor satisfies the dominant energy condition. The treatment is also given for the anti-de Sitter (2+1)-dimensional gravity.

  19. Light weakly coupled axial forces: models, constraints, and projections

    DOE PAGES

    Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...

    2017-05-01

    Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.

  20. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace: a test pattern is classified depending on its proximity to the class centroids. The effectiveness of the proposed method is experimentally verified and compared with the multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.
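    A classifier in the spirit described above, a reduced-kernel ridge mapping into label space followed by nearest-centroid assignment, can be sketched as follows. The RBF kernel, the every-fifth-point subsampling, the regulariser value, and the toy data are all assumptions for illustration, not the authors' exact VVRKFA formulation.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# toy 3-class data in 2-D
rng = np.random.default_rng(2)
centers = np.array([[0, 0], [3, 0], [0, 3]])
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
Y = np.eye(3)[y]                     # one-hot targets in a 3-D label space

B = X[::5]                           # reduced kernel basis: a subsample of the data
K = rbf(X, B)
lam = 1e-3                           # ridge regulariser
# linear operator mapping reduced-kernel features into the label space
W = np.linalg.solve(K.T @ K + lam * np.eye(B.shape[0]), K.T @ Y)

Z = K @ W                            # training points mapped into label space
centroids = np.vstack([Z[y == c].mean(0) for c in range(3)])

def predict(Xnew):
    """Map new patterns into label space, then assign the nearest class centroid."""
    Znew = rbf(Xnew, B) @ W
    d = ((Znew[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

print(predict(np.array([[0.1, 0.1], [2.9, 0.2], [0.2, 2.8]])))
```

The reduced basis `B` is what keeps training inexpensive relative to a full kernel machine: the ridge solve scales with the basis size, not the full training set size.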
