Extensions of the standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramond, P.
1983-01-01
In these lectures we focus on several issues that arise in theoretical extensions of the standard model. First we describe the kinds of fermions that can be added to the standard model without affecting known phenomenology. We focus in particular on three types: the vector-like completion of the existing fermions, as would be predicted by a Kaluza-Klein type theory, which we find cannot be realistically achieved without some chiral symmetry; fermions which are vector-like by themselves, such as do appear in supersymmetric extensions; and finally anomaly-free chiral sets of fermions. We note that a chiral symmetry, such as the Peccei-Quinn symmetry, can be used to produce a vector-like theory which, at scales less than M_W, appears to be chiral. Next, we turn to the analysis of the second hierarchy problem which arises in Grand Unified extensions of the standard model and plays a crucial role in proton decay of supersymmetric extensions. We review the known mechanisms for avoiding this problem and present a new one which seems to lead to the (family) triplication of the gauge group. Finally, this being a summer school, we present a list of homework problems. 44 references.
Optimum instantaneous impulsive orbital injection to attain a specified asymptotic velocity vector.
NASA Technical Reports Server (NTRS)
Bean, W. C.
1971-01-01
Analysis of the necessary conditions of Battin for instantaneous orbital injection, with consideration of the uniqueness of his solution and of the further problem which arises in the degenerate case when the radius vector and the asymptotic vector are separated by 180 deg. It is shown that when the angular separation theta between the radius vector and the asymptotic velocity vector satisfies theta not equal to 180 deg, there are precisely two insertion-velocity vectors which permit attainment of the target asymptotic velocity vector, one yielding posigrade, the other retrograde motion. When theta equals 180 deg, there is a family of insertion-velocity vectors which permit attainment of a specified asymptotic velocity vector, with a unique insertion-velocity vector for every arbitrary orientation of a target unit angular momentum vector.
Fast angular synchronization for phase retrieval via incomplete information
NASA Astrophysics Data System (ADS)
Viswanathan, Aditya; Iwen, Mark
2015-08-01
We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^* / |x_j x_k^*|, j,k ∈ {1,...,d}, where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem, where we recover an unknown complex vector from phaseless (or magnitude) measurements.
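For concreteness, here is a minimal sketch of the eigenvector-based synchronization step in Python, under the simplifying assumption of complete, noiseless measurements (the paper's focus is the incomplete, noisy case); all names are illustrative:

```python
import numpy as np

def angular_sync(H, mask):
    """Estimate unit-modulus phases from a Hermitian matrix H of
    phase-difference measurements; unmeasured entries are zeroed."""
    H_tilde = np.where(mask, H, 0.0)
    vals, vecs = np.linalg.eigh(H_tilde)   # Hermitian eigendecomposition
    v = vecs[:, -1]                        # leading eigenvector
    return v / np.abs(v)                   # project entries onto the unit circle

rng = np.random.default_rng(0)
d = 50
x = np.exp(1j * rng.uniform(0, 2 * np.pi, d))   # ground-truth phases
H = np.outer(x, x.conj())                        # H[j, k] = x_j x_k^*
mask = np.ones((d, d), dtype=bool)               # complete measurements here
est = angular_sync(H, mask)
est *= x[0] / est[0]                             # fix the global phase
print(np.max(np.abs(est - x)))                   # ~1e-15 in the noiseless case
```

The phases are only ever recoverable up to a global rotation, hence the alignment step before comparing.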
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
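As a point of reference, the sketch below sets up a toy instance of Gm = d in Python and solves it with bounded-variable least squares; this is a stand-in baseline, not the MRE method itself, which instead derives a full posterior density for m. All data are synthetic:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
n_data, n_model = 40, 60                    # underdetermined, as is typical
G = rng.standard_normal((n_data, n_model))  # toy forward model
m_true = rng.uniform(0.0, 1.0, n_model)     # true model parameters
d = G @ m_true + 1e-3 * rng.standard_normal(n_data)  # noisy data

# prior knowledge enters this baseline only through the bounds 0 <= m <= 1;
# MRE would additionally use a prior expected value of m
res = lsq_linear(G, d, bounds=(0.0, 1.0))
print(np.linalg.norm(res.x - m_true))
```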
Stochastic species abundance models involving special copulas
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2018-01-01
Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in biology, we study three distinct toy models where copulas play a key role. In the first, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second, a quasi-copula problem arises in a flagged species abundance model. In the third, we study completely random species abundance models in the hypercube, namely those that are not of product type, have uniform margins, and are singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
The Cauchy problem for the Pavlov equation
NASA Astrophysics Data System (ADS)
Grinevich, P. G.; Santini, P. M.; Wu, D.
2015-10-01
Commutation of multidimensional vector fields leads to integrable nonlinear dispersionless PDEs that arise in various problems of mathematical physics and have been intensively studied in the recent literature. This report aims to solve the scattering and inverse scattering problem for integrable dispersionless PDEs, recently introduced only at a formal level, concentrating on the prototypical example of the Pavlov equation, and to justify an existence theorem for global bounded solutions of the associated Cauchy problem with small data. An essential part of this work was done during the visit of the three authors to the Centro Internacional de Ciencias in Cuernavaca, Mexico, in November-December 2012.
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
A comparison of in situ measurements of E and -V x B from Dynamics Explorer 2
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Coley, W. R.; Heelis, R. A.; Maynard, N. C.; Aggson, T. L.
1993-01-01
Dynamics Explorer 2 provided the first opportunity to make a direct comparison of in situ measurements of the high-latitude convection electric field by two distinctly different techniques. The vector electric field instrument (VEFI) used antennae to measure the intrinsic electric fields, while the ion drift meter (IDM) and retarding potential analyzer (RPA) measured the ion drift velocity vector, from which the convection electric field can be deduced. The data from three orbits having large electric fields at high latitude are presented, one at high, one at medium, and one at low altitude. The general agreement between the two measurements of electric field is very good, with typical differences at high latitudes of the order of a few millivolts per meter, but there are some regions where the particle fluxes are extremely large (e.g., the cusp) and the disagreement is worse, probably because of IDM difficulties. The auroral zone potential patterns derived from the two devices are in excellent agreement for two of the cases, but not in the third, where bad attitude data may be the problem. At low latitudes there are persistent differences in the measurements of a few millivolts per meter, though these differences are quite constant from orbit to orbit. This problem seems to arise from some shortcoming in the VEFI measurements. Overall, however, these measurements confirm the concept of 'frozen-in' plasma that drifts with velocity E x B/B^2 within the measurement errors of the two techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve the robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a priori knowledge of, or assumptions on, the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as "the compatible weighted matching". In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
Quantum algorithm for linear systems of equations.
Harrow, Aram W; Hassidim, Avinatan; Lloyd, Seth
2009-10-09
Solving linear systems of equations is a common problem that arises both on its own and as a subroutine in more complex problems: given a matrix A and a vector b, find a vector x such that Ax = b. We consider the case where one does not need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x†Mx for some matrix M. In this case, when A is sparse, N x N, and has condition number κ, the fastest known classical algorithms can find x and estimate x†Mx in time scaling roughly as N√κ. Here, we exhibit a quantum algorithm for estimating x†Mx whose runtime is polynomial in log(N) and κ. Indeed, for small values of κ [i.e., poly log(N)], we prove (using some common complexity-theoretic assumptions) that any classical algorithm for this problem generically requires exponentially more time than our quantum algorithm.
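For scale, the classical computation that the abstract's N√κ estimate refers to looks like the following dense-algebra sketch (illustrative only; the quantum algorithm never writes x down explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
A = rng.standard_normal((N, N))
A = A + A.T + N * np.eye(N)       # symmetric, comfortably invertible
b = rng.standard_normal(N)
M = np.eye(N)                     # observable; identity gives ||x||^2

x = np.linalg.solve(A, b)         # classical cost grows polynomially in N
print(x.conj() @ M @ x)           # the expectation value x^dagger M x
print(np.linalg.cond(A))          # the condition number kappa
```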
Multitasking the Davidson algorithm for the large, sparse eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umar, V.M.; Fischer, C.F.
1989-01-01
The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for the large and sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributed to time-consuming and costly processing of zero-valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range of matrix sizes that can be manipulated efficiently was expanded by more than one order of magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.
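A compact sketch of the Davidson iteration for the lowest eigenpair of a symmetric matrix, using the classic diagonal preconditioner; a dense matrix stands in for the sparse Hamiltonians of the abstract, and all names are illustrative:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=100):
    """Davidson iteration for the smallest eigenpair of a symmetric matrix A."""
    n = A.shape[0]
    diag = np.diag(A).copy()
    V = np.zeros((n, 1))
    V[np.argmin(diag), 0] = 1.0          # start at the smallest diagonal entry
    for _ in range(max_iter):
        Q, _ = np.linalg.qr(V)           # orthonormal search subspace
        H = Q.T @ A @ Q                  # Rayleigh-Ritz projection
        vals, vecs = np.linalg.eigh(H)
        theta, u = vals[0], Q @ vecs[:, 0]
        r = A @ u - theta * u            # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta             # diagonal (Jacobi) preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        V = np.column_stack([Q, r / denom])  # expand the subspace
    return theta, u

A = np.diag(np.arange(1.0, 201.0))       # strongly diagonal test matrix
A += 1e-2 * np.random.default_rng(4).standard_normal(A.shape)
A = 0.5 * (A + A.T)                      # symmetrize the perturbation
theta, u = davidson_lowest(A)
print(theta)                             # close to the smallest eigenvalue (~1)
```

The diagonal preconditioner is what makes Davidson effective precisely for the diagonally dominant matrices common in atomic structure work.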
Duality-symmetric supersymmetric Yang-Mills theory in three dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishino, Hitoshi; Rajpoot, Subhash
We formulate a duality-symmetric N=1 supersymmetric Yang-Mills theory in three dimensions. Our field content is (A_μ^I, λ^I, φ^I), where the index I is for the adjoint representation of an arbitrary gauge group G. Our Hodge duality symmetry is F_{μν}^I = +ε_{μν}^ρ D_ρ φ^I. Because of this relationship, the presence of two physical fields A_μ^I and φ^I within the same N=1 supermultiplet poses no problem. We can couple this multiplet to another vector multiplet (C_μ^I, χ^I; B_{μν}^I) with 1+1 physical degrees of freedom modulo dim G. Thanks to peculiar couplings and supersymmetry, the usual problem with an extra vector field in a nontrivial representation does not arise in our system.
The Application of a Technique for Vector Correlation to Problems in Meteorology and Oceanography.
NASA Astrophysics Data System (ADS)
Breaker, L. C.; Gemmill, W. H.; Crosby, D. S.
1994-11-01
In a recent study, Crosby et al. proposed a definition for vector correlation that has not been commonly used in meteorology or oceanography. This definition has both a firm theoretical basis and a rather complete set of desirable statistical properties. In this study, the authors apply the definition to practical problems arising in meteorology and oceanography. In the first of two case studies, vector correlations were calculated between subsurface currents for five locations along the southeastern shore of Lake Erie. Vector correlations for one sample size were calculated for all current meter combinations, first including the seiche frequency and then with the seiche frequency removed. Removal of the seiche frequency, which was easily detected in the current spectra, had only a small effect on the vector correlations. Under reasonable assumptions, the vector correlations were in most cases statistically significant and revealed considerable fine structure in the vector correlation sequences. In some cases, major variations in vector correlation coincided with changes in surface wind. The vector correlations for the various current meter combinations decreased rapidly with increasing spatial separation. For one current meter combination, canonical correlations were also calculated; the first canonical correlation tended to retain the underlying trend, whereas the second canonical correlation retained the peaks in the vector correlations. In the second case study, vector correlations were calculated between marine surface winds derived from the National Meteorological Center's Global Data Assimilation System and observed winds acquired from the network of National Data Buoy Center buoys that are located off the continental United States and in the Gulf of Alaska. Results of this comparison indicated that 1) there was a significant decrease in correlation between the predicted and observed winds with increasing forecast interval out to 72 h, 2) the technique provides a sensitive indicator for detecting bad buoy reports, and 3) there was no obvious seasonal cycle in the monthly vector correlations for the period of observation.
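As a rough illustration, the vector correlation of Crosby et al. can be computed from the blocks of the joint covariance of two 2-D vector series via rho_v^2 = tr(S11^-1 S12 S22^-1 S21); the sketch below assumes this canonical-correlation form and uses synthetic wind-like data:

```python
import numpy as np

def vector_correlation(w1, w2):
    """rho_v^2 for two series of 2-D vectors, shape (n, 2); ranges 0..2."""
    S = np.cov(np.hstack([w1, w2]).T)        # 4x4 joint covariance
    S11, S12 = S[:2, :2], S[:2, 2:]
    S21, S22 = S[2:, :2], S[2:, 2:]
    return np.trace(np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S21))

rng = np.random.default_rng(3)
w = rng.standard_normal((500, 2))            # (u, v) components
print(vector_correlation(w, w))              # identical series -> 2.0
print(vector_correlation(w, rng.standard_normal((500, 2))))  # independent -> ~0
```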
NASA Astrophysics Data System (ADS)
Beck, L.; Wood, B.; Whitney, S.; Rossi, R.; Spanner, M.; Rodriguez, M.; Rodriguez-Ramirez, A.; Salute, J.; Legters, L.; Roberts, D.; Rejmankova, E.; Washino, R.
1993-08-01
This paper describes a procedure whereby remote sensing and geographic information system (GIS) technologies are used in a sample design to study the habitat of Anopheles albimanus, one of the principal vectors of malaria in Central America. This procedure incorporates Landsat-derived land cover maps with digital elevation and road network data to identify a random selection of larval habitats accessible for field sampling. At the conclusion of the sampling season, the larval counts will be used to determine habitat productivity, and then integrated with information on human settlement to assess where people are at high risk of malaria. This approach would be appropriate in areas where land cover information is lacking and problems of access constrain field sampling. The use of a GIS also permits other data (such as insecticide spraying data) to be incorporated in the sample design as they arise. This approach would also be pertinent for other tropical vector-borne diseases, particularly where human activities impact disease vector habitat.
Explicitly covariant dispersion relations and self-induced transparency
NASA Astrophysics Data System (ADS)
Mahajan, S. M.; Asenjo, Felipe A.
2017-02-01
Explicitly covariant dispersion relations for a variety of plasma waves in unmagnetized and magnetized plasmas are derived in a systematic manner from a fully covariant plasma formulation. One needs to invoke relatively little-known invariant combinations constructed from the ambient electromagnetic fields and the wave vector to accomplish the program. The implications of this work for the self-induced transparency effect are discussed. Some problems arising from the inconsistent use of relativity are pointed out.
Spin reorientation of a nonsymmetric body with energy dissipation
NASA Technical Reports Server (NTRS)
Cenker, R. J.
1973-01-01
Stable rotating semi-rigid bodies have been demonstrated analytically and verified in flights such as the Explorer 1 and ATS-5 satellites. The problem arises from the two potential orientations which the final spin vector can take after a large-angle reorientation from the minor to the major axis, i.e., along the positive or negative axis of maximum inertia. Reorientation of a satellite initially spinning about the minor axis using an energy dissipation device may require that the final spin orientation be controlled. Examples of possible applications are the Apogee Motor Assembly with Paired Satellites (AMAPS) configuration, where proper orientation of the thruster is required, and reorientation of ATS-5, where the spin-sensitive nature of the despin device (yo-yo mechanism) requires that the final spin vector point in a specified direction.
NASA Technical Reports Server (NTRS)
Gentzsch, W.
1982-01-01
Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined, and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The systems' performance is compared. The addition of parts of two N x N arrays is considered. The influence of the architecture on the algorithms and programming language is demonstrated. Numerical analysis of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generations of scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, however, the number of Newton iterations needed per step to solve the discretized system of equations can vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
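A minimal sketch of one such inner solve in SciPy: a single Newton-like step J dq = -R with restarted GMRES and a small Krylov subspace, in the spirit of the 5-20 vectors mentioned above. The tridiagonal Jacobian is a stand-in for a linearized flow operator (rtol is named tol in older SciPy releases):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 1000
J = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
R = np.ones(n)                       # residual of the nonlinear system (toy)

# restart=20 caps the number of stored orthogonal Krylov vectors
dq, info = gmres(J, -R, restart=20, rtol=1e-6)
print(info)                          # 0 indicates convergence
```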
On Takens’ last problem: tangencies and time averages near heteroclinic networks
NASA Astrophysics Data System (ADS)
Labouriau, Isabel S.; Rodrigues, Alexandre A. P.
2017-05-01
We obtain a structurally stable family of smooth ordinary differential equations exhibiting heteroclinic tangencies for a dense subset of parameters. We use this to find vector fields C²-close to an element of the family exhibiting a tangency, for which the set of solutions with historic behaviour contains an open set. This provides an affirmative answer to Takens’ last problem (Takens 2008 Nonlinearity 21 T33-6). A solution with historic behaviour is one for which the time averages do not converge as time goes to infinity. Takens’ problem asks for dynamical systems where historic behaviour occurs persistently for initial conditions in a set with positive Lebesgue measure. The family appears in the unfolding of a degenerate differential equation whose flow has an asymptotically stable heteroclinic cycle involving two-dimensional connections of non-trivial periodic solutions. We show that the degenerate problem also has historic behaviour, since for an open set of initial conditions starting near the cycle, the time averages approach the boundary of a polygon whose vertices depend on the centres of gravity of the periodic solutions and their Floquet multipliers. We illustrate our results with an explicit example where historic behaviour arises C²-close to an SO(2)-equivariant vector field.
Boundary Concentration for Eigenvalue Problems Related to the Onset of Superconductivity
NASA Astrophysics Data System (ADS)
del Pino, Manuel; Felmer, Patricio L.; Sternberg, Peter
We examine the asymptotic behavior of the eigenvalue μ(h) and corresponding eigenfunction associated with the variational problem
Black hole perturbations in vector-tensor theories: the odd-mode analysis
NASA Astrophysics Data System (ADS)
Kase, Ryotaro; Minamitsuji, Masato; Tsujikawa, Shinji; Zhang, Ying-li
2018-02-01
In generalized Proca theories with vector-field derivative couplings, a bunch of hairy black hole solutions have been derived on a static and spherically symmetric background. In this paper, we formulate the odd-parity black hole perturbations in generalized Proca theories by expanding the corresponding action up to second order and investigate whether or not black holes with vector hair suffer ghost or Laplacian instabilities. We show that the models with cubic couplings G_3(X), where X = -A_μA^μ/2 with a vector field A_μ, do not provide any additional stability condition, as in General Relativity. On the other hand, the exact charged stealth Schwarzschild solution with a nonvanishing longitudinal vector component A_1, which originates from the coupling to the Einstein tensor G^{μν}A_μA_ν, equivalent to the quartic coupling G_4(X) containing a linear function of X, is unstable in the vicinity of the event horizon. The same instability problem also persists for hairy black holes arising from general quartic power-law couplings G_4(X) ⊃ β_4 X^n with nonvanishing A_1, while the other branch with A_1 = 0 can be consistent with conditions for the absence of ghost and Laplacian instabilities. We also discuss the case of other exact and numerical black hole solutions associated with intrinsic vector-field derivative couplings and show that there exists a wide range of parameter space in which the solutions suffer neither ghost nor Laplacian instabilities against odd-parity perturbations.
NASA Astrophysics Data System (ADS)
Mitri, Farid G.
2018-01-01
Generalized solutions of vector Airy light-sheets, adjustable per their derivative order m, are introduced stemming from the Lorenz gauge condition and Maxwell's equations using the angular spectrum decomposition method. The Cartesian components of the incident radiated electric, magnetic and time-averaged Poynting vector fields in free space (excluding evanescent waves) are determined and computed with particular emphasis on the derivative order of the Airy light-sheet and the polarization of the magnetic vector potential forming the beam. Negative transverse time-averaged Poynting vector components can arise, while the longitudinal counterparts are always positive. Moreover, the analysis is extended to compute the optical radiation force and spin torque vector components on a lossless dielectric prolate subwavelength spheroid in the framework of the electric dipole approximation. The results show that negative forces and spin torque sign reversal arise depending on the derivative order of the beam, the polarization of the magnetic vector potential, and the orientation of the subwavelength prolate spheroid in space. The spin torque sign reversal suggests that counter-clockwise or clockwise rotations around the center of mass of the subwavelength spheroid can occur. The results find useful applications in single Airy light-sheet tweezers, particle manipulation, handling, and rotation applications, to name a few examples.
The limitations of staggered grid finite differences in plasticity problems
NASA Astrophysics Data System (ADS)
Pranger, Casper; Herrendörfer, Robert; Le Pourhiet, Laetitia
2017-04-01
Most crustal-scale applications operate at grid sizes much larger than those at which plasticity occurs in nature. As a consequence, plastic shear bands often localize to the scale of one grid cell, and numerical ploys — like introducing an artificial length scale — are needed to counter this. If for whatever reasons (good or bad) this is not done, we find that problems may arise due to the fact that in the staggered grid finite difference discretization, unknowns like components of the stress tensor and velocity vector are located in physically different positions. This incurs frequent interpolation, reducing the accuracy of the discretization. For purely stress-dependent plasticity problems the adverse effects might be contained because the magnitude of the stress discontinuity across a plastic shear band is limited. However, we find that when rate-dependence of friction is added in the mix, things become ugly really fast and the already hard-to-solve and highly nonlinear problem of plasticity incurs an extra penalty.
Tunneling and speedup in quantum optimization for permutation-symmetric problems
Muthukrishnan, Siddharth; Albash, Tameem; Lidar, Daniel A.
2016-07-21
Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Lastly, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.
Implicit solvers for unstructured meshes
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, Dimitri J.
1991-01-01
Implicit methods for unstructured mesh computations are developed and tested. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned generalized minimum residual technique. Three different preconditioners are investigated: the incomplete LU factorization (ILU), block diagonal factorization, and the symmetric successive over-relaxation (SSOR). The preconditioners have been optimized to have good vectorization properties. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also investigated. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
Directional Statistics for Polarization Observations of Individual Pulses from Radio Pulsars
NASA Astrophysics Data System (ADS)
McKinnon, M. M.
2010-10-01
Radio polarimetry is a three-dimensional statistical problem. The three-dimensional aspect of the problem arises from the Stokes parameters Q, U, and V, which completely describe the polarization of electromagnetic radiation and conceptually define the orientation of a polarization vector in the Poincaré sphere. The statistical aspect of the problem arises from the random fluctuations in the source-intrinsic polarization and the instrumental noise. A simple model for the polarization of pulsar radio emission has been used to derive the three-dimensional statistics of radio polarimetry. The model is based upon the proposition that the observed polarization is due to the incoherent superposition of two, highly polarized, orthogonal modes. The directional statistics derived from the model follow the Bingham-Mardia and Fisher family of distributions. The model assumptions are supported by the qualitative agreement between the statistics derived from it and those measured with polarization observations of the individual pulses from pulsars. The orthogonal modes are thought to be the natural modes of radio wave propagation in the pulsar magnetosphere. The intensities of the modes become statistically independent when generalized Faraday rotation (GFR) in the magnetosphere causes the difference in their phases to be large. A stochastic version of GFR occurs when fluctuations in the phase difference are also large, and may be responsible for the more complicated polarization patterns observed in pulsar radio emission.
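The geometric picture in the opening sentences can be made concrete with a short sketch that maps Stokes samples (I, Q, U, V) to unit vectors in the Poincaré sphere together with the degree of polarization; the sample values are illustrative:

```python
import numpy as np

def poincare_vectors(I, Q, U, V):
    """Unit polarization vectors and degrees of polarization from Stokes data."""
    p = np.stack([Q, U, V], axis=-1)
    L = np.linalg.norm(p, axis=-1)        # polarized intensity
    return p / L[..., None], L / I        # direction on the sphere, fraction

I = np.array([1.0, 2.0])
Q = np.array([0.3, 0.0])
U = np.array([0.4, 1.0])
V = np.array([0.0, 1.0])
n, frac = poincare_vectors(I, Q, U, V)
print(n)      # first sample: direction (0.6, 0.8, 0.0)
print(frac)   # first sample: 0.5, i.e. 50% polarized
```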
NASA Astrophysics Data System (ADS)
Burban, Igor; Galinat, Lennart; Stolin, Alexander
2017-11-01
In this paper we study the combinatorics of quasi-trigonometric solutions of the classical Yang-Baxter equation, arising from simple vector bundles on a nodal Weierstraß cubic. Dedicated to the memory of Petr Petrovich Kulish.
NASA Astrophysics Data System (ADS)
Chen, Jiangwei; Dai, Yuyao; Yan, Lin; Zhao, Huimin
2018-04-01
In this paper, we demonstrate theoretically that a steady bound electromagnetic eigenstate can arise in an infinite homogeneous isotropic linear metamaterial with a zero real part of the impedance and a nonzero imaginary part of the wave vector; this is partly because, here, the nonzero imaginary part of the wave vector does not involve energy loss or gain. When the real part of the impedance of the metamaterial is altered, the bound electromagnetic eigenstate may become a progressive wave. Our work may be useful for further understanding the energy conversion and conservation properties of electromagnetic waves in dispersive and absorptive media, and it provides a feasible route to stop, store and release electromagnetic waves (light) conveniently by using a metamaterial with a near-zero real part of the impedance.
Vector optical activity in the Weyl semimetal TaAs
Norman, M. R.
2015-12-15
Here, it is shown that the Weyl semimetal TaAs can have a significant polar vector contribution to its optical activity. This is quantified by ab initio calculations of the resonant x-ray diffraction at the Ta L1 edge. For the Bragg vector (400), this polar vector contribution to the circular intensity differential between left and right polarized x-rays is predicted to be comparable to that arising from linear dichroism. Implications of this result for optical effects predicted for topological Weyl semimetals are discussed.
A Note on the Application of the Extended Bernoulli Equation
1999-02-01
Dv/Dt = -∇p/ρ + ∇Ω + (1/ρ) s_ij,j ,   (2)

where D/Dt denotes the material derivative (discussed in the following section); v is the velocity vector; Ω is the force potential; ∇ is the vector gradient operator; s_ij is the deviatoric-stress tensor arising from any type of elasto-viscoplastic constitutive behavior; and s_ij,j is index notation for ∂s_ij/∂x_j, the vector condensation of the deviatoric-stress tensor.
On extreme points of the diffusion polytope
Hay, M. J.; Schiff, J.; Fisch, N. J.
2017-01-04
Here, we consider a class of diffusion problems defined on simple graphs in which the populations at any two vertices may be averaged if they are connected by an edge. The diffusion polytope is the convex hull of the set of population vectors attainable using finite sequences of these operations. A number of physical problems have linear programming solutions taking the diffusion polytope as the feasible region, e.g. the free energy that can be removed from plasma using waves, so there is a need to describe and enumerate its extreme points. We also review known results for the case of the complete graph K_n, and study a variety of problems for the path graph P_n and the cyclic graph C_n. Finally, we describe the different kinds of extreme points that arise, and identify the diffusion polytope in a number of simple cases. In the case of increasing initial populations on P_n the diffusion polytope is topologically an n-dimensional hypercube.
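The elementary move generating the polytope is easy to state in code. A small sketch on the path graph P_4, where each operation averages the populations at the two endpoints of an edge and conserves the total:

```python
import numpy as np

def average_edge(pop, i):
    """Average the populations at adjacent vertices i and i+1 of a path graph."""
    pop = pop.copy()
    pop[i] = pop[i + 1] = 0.5 * (pop[i] + pop[i + 1])
    return pop

p = np.array([4.0, 1.0, 3.0, 2.0])   # initial populations on P_4
p = average_edge(p, 0)               # -> [2.5, 2.5, 3.0, 2.0]
p = average_edge(p, 2)               # -> [2.5, 2.5, 2.5, 2.5]
print(p, p.sum())                    # the total population (10.0) is conserved
```

The diffusion polytope is then the convex hull of all vectors reachable by such finite sequences of averagings.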
An automatic multigrid method for the solution of sparse linear systems
NASA Technical Reports Server (NTRS)
Shapira, Yair; Israeli, Moshe; Sidi, Avram
1993-01-01
An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
ML 3.0 smoothed aggregation user's guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen
2004-05-01
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative package [15]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
ML 3.1 smoothed aggregation user's guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen
2004-10-01
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative package [16]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains
NASA Astrophysics Data System (ADS)
Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville
2017-01-01
In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
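To fix ideas, a longitudinal measurement is simply the line integral of the field component parallel to the integration line. The sketch below approximates one numerically for a toy field with nonzero divergence, F(x, y) = (x, y); all names are illustrative:

```python
import numpy as np

def longitudinal_measurement(F, p0, p1, n_pts=2000):
    """Approximate the integral of F . tau along the segment p0 -> p1."""
    t = np.linspace(0.0, 1.0, n_pts)
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]
    length = np.linalg.norm(p1 - p0)
    tau = (p1 - p0) / length                 # unit tangent of the line
    vals = F(pts) @ tau                      # parallel projection of the field
    ds = length / (n_pts - 1)
    return np.sum(0.5 * (vals[1:] + vals[:-1])) * ds   # trapezoid rule

def F(pts):
    return pts                               # toy field with divergence 2 everywhere

p0, p1 = np.array([-1.0, 0.2]), np.array([1.0, 0.2])
print(longitudinal_measurement(F, p0, p1))   # ~0 for this symmetric chord
```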
Kac determinant and singular vector of the level N representation of Ding-Iohara-Miki algebra
NASA Astrophysics Data System (ADS)
Ohkubo, Yusuke
2018-05-01
In this paper, we obtain the formula for the Kac determinant of the algebra arising from the level N representation of the Ding-Iohara-Miki algebra. It is also discovered that its singular vectors correspond to generalized Macdonald functions (the q-deformed version of the AFLT basis).
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Nonlinear programming for classification problems in machine learning
NASA Astrophysics Data System (ADS)
Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio
2016-10-01
We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become more and more relevant due to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, and customer profiling. Classification deals with the separation of sets by means of appropriate separation surfaces, which are generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some of these proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.
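As a minimal illustration of nonlinear separation, the sketch below trains a kernel SVM on a toy data set that no hyperplane can split; a spherical separation approach would suit the same geometry:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# concentric circles: linearly inseparable, easily separated by an RBF kernel
X, y = make_circles(n_samples=400, factor=0.4, noise=0.08, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # typically close to 1.0 on this toy problem
```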
Abstract generalized vector quasi-equilibrium problems in noncompact Hadamard manifolds.
Lu, Haishu; Wang, Zhihua
2017-01-01
This paper deals with the abstract generalized vector quasi-equilibrium problem in noncompact Hadamard manifolds. We prove the existence of solutions to the abstract generalized vector quasi-equilibrium problem under suitable conditions and provide applications to an abstract vector quasi-equilibrium problem, a generalized scalar equilibrium problem, a scalar equilibrium problem, and a perturbed saddle point problem. Finally, as an application of the existence of solutions to the generalized scalar equilibrium problem, we obtain a weakly mixed variational inequality and two mixed variational inequalities. The results presented in this paper unify and generalize many known results in the literature.
A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons
NASA Astrophysics Data System (ADS)
Bao, J.; Lin, Z.; Lu, Z. X.
2018-02-01
A conservative scheme has been formulated and verified for gyrokinetic particle simulations of electromagnetic waves and instabilities in magnetized plasmas. An electron continuity equation derived from the drift kinetic equation is used to time advance the electron density perturbation by using the perturbed mechanical flow calculated from the parallel vector potential, and the parallel vector potential is solved by using the perturbed canonical flow from the perturbed distribution function. In gyrokinetic particle simulations using this new scheme, the shear Alfvén wave dispersion relation in the shearless slab and continuum damping in the sheared cylinder have been recovered. The new scheme overcomes the stringent requirement in the conventional perturbative simulation method that perpendicular grid size needs to be as small as electron collisionless skin depth even for the long wavelength Alfvén waves. The new scheme also avoids the problem in the conventional method that an unphysically large parallel electric field arises due to the inconsistency between electrostatic potential calculated from the perturbed density and vector potential calculated from the perturbed canonical flow. Finally, the gyrokinetic particle simulations of the Alfvén waves in sheared cylinder have superior numerical properties compared with the fluid simulations, which suffer from numerical difficulties associated with singular mode structures.
Interacting vector fields in relativity without relativity
NASA Astrophysics Data System (ADS)
Anderson, Edward; Barbour, Julian
2002-06-01
Barbour, Foster and Ó Murchadha have recently developed a new framework, called here the 3-space approach, for the formulation of classical bosonic dynamics. Neither time nor a locally Minkowskian structure of spacetime are presupposed. Both arise as emergent features of the world from geodesic-type dynamics on a space of three-dimensional metric-matter configurations. In fact gravity, the universal light-cone and Abelian gauge theory minimally coupled to gravity all arise naturally through a single common mechanism. It yields relativity - and more - without presupposing relativity. This paper completes the recovery of the presently known bosonic sector within the 3-space approach. We show, for a rather general ansatz, that 3-vector fields can interact among themselves only as Yang-Mills fields minimally coupled to gravity.
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
Implicit solvers for unstructured meshes
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, Dimitri J.
1991-01-01
Implicit methods were developed and tested for unstructured mesh computations. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned GMRES (Generalized Minimum Residual) technique. Three different preconditioners were studied, namely, the incomplete LU factorization (ILU), block diagonal factorization, and the symmetric successive over relaxation (SSOR). The preconditioners were optimized to have good vectorization properties. SSOR and ILU were also studied as iterative schemes. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also studied. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems
NASA Technical Reports Server (NTRS)
Chen, Hsin-Chu; He, Ai-Fang
1993-01-01
The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.
Dark forces coupled to nonconserved currents
NASA Astrophysics Data System (ADS)
Dror, Jeff A.; Lasenby, Robert; Pospelov, Maxim
2017-10-01
New light vectors with dimension-4 couplings to Standard Model states have (energy/vector mass)²-enhanced production rates unless the current they couple to is conserved. These processes allow us to derive new constraints on the couplings of such vectors that are significantly stronger than those in the previous literature for a wide variety of models. Examples include vectors with axial couplings to quarks and vectors coupled to currents (such as baryon number) that are only broken by the chiral anomaly. Our new limits arise from a range of processes, including rare Z decays and flavor-changing meson decays, and rule out a number of phenomenologically motivated proposals.
NASA Astrophysics Data System (ADS)
Choi, Soo-Min; Hochberg, Yonit; Kuflik, Eric; Lee, Hyun Min; Mambrini, Yann; Murayama, Hitoshi; Pierre, Mathias
2017-10-01
Strongly Interacting Massive Particles (SIMPs) have recently been proposed as light thermal dark matter relics. Here we consider an explicit realization of the SIMP mechanism in the form of vector SIMPs arising from an SU(2) X hidden gauge theory, where the accidental custodial symmetry protects the stability of the dark matter. We propose several ways of equilibrating the dark and visible sectors in this setup. In particular, we show that a light dark Higgs portal can maintain thermal equilibrium between the two sectors, as can a massive dark vector portal with its generalized Chern-Simons couplings to the vector SIMPs, all while remaining consistent with experimental constraints.
Jorge-Botana, Guillermo; Olmos, Ricardo; León, José Antonio
2009-11-01
There is currently a widespread interest in indexing and extracting taxonomic information from large text collections. An example is the automatic categorization of informally written medical or psychological diagnoses, followed by the extraction of epidemiological information or even terms and structures needed to formulate guiding questions as a heuristic tool for helping doctors. Vector space models have been successfully used to this end (Lee, Cimino, Zhu, Sable, Shanker, Ely & Yu, 2006; Pakhomov, Buntrock & Chute, 2006). In this study we use a computational model known as Latent Semantic Analysis (LSA) on a diagnostic corpus with the aim of retrieving definitions (in the form of lists of semantic neighbors) of common structures it contains (e.g. "storm phobia", "dog phobia") or less common structures that might be formed by logical combinations of categories and diagnostic symptoms (e.g. "gun personality" or "germ personality"). In the quest to bring definitions into line with the meaning of structures and make them in some way representative, various problems commonly arise while recovering content using vector space models. We propose some approaches which bypass these problems, such as Kintsch's (2001) predication algorithm and some corrections to the way lists of neighbors are obtained, which have already been tested on semantic spaces in a non-specific domain (Jorge-Botana, León, Olmos & Hassan-Montero, under review). The results support the idea that the predication algorithm may also be useful for extracting more precise meanings of certain structures from scientific corpora, and that the introduction of some corrections based on vector length may increase its efficiency on non-representative terms.
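The neighbor-list retrieval step can be sketched as follows; the `norm_weight` knob is a hypothetical stand-in for the vector-length corrections the study describes, whose exact form may differ.

```python
import numpy as np

def semantic_neighbors(term_vecs, query_vec, k=10, norm_weight=0.0):
    """Return the k nearest semantic neighbors of a query vector.

    term_vecs: dict mapping term -> LSA vector (1-D numpy array).
    norm_weight: 0.0 gives plain cosine similarity; positive values mix
    in vector length, a stand-in for the length-based corrections the
    study describes (their exact scheme may differ).
    """
    q = query_vec / np.linalg.norm(query_vec)
    scored = []
    for term, v in term_vecs.items():
        cos = float(v @ q) / np.linalg.norm(v)
        scored.append((cos * np.linalg.norm(v) ** norm_weight, term))
    return [term for _, term in sorted(scored, reverse=True)[:k]]
```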
Polymeric Carriers for Gene Delivery: Chitosan and Poly(amidoamine) Dendrimers
Xu, Qingxing; Wang, Chi-Hwa; Pack, Daniel Wayne
2012-01-01
Gene therapy is a potential medical solution that promises new treatments and may hold the cure for many different types of human diseases and disorders. However, gene therapy is still a growing medical field and the technology is still in its infancy. The main challenge for gene therapy is to find safe and effective vectors that are able to deliver genes to specific cells and have them expressed inside those cells. Due to safety concerns, synthetic delivery systems, rather than viral vectors, are preferred for gene delivery, and significant efforts have been focused on the development of this field. However, these synthetic delivery systems face problems such as low gene transfer efficiency, cytotoxicity and lack of cell-targeting capability. Over the years, a variety of new and effective polymers have been designed and synthesized specifically for gene delivery, and various strategies aimed at enhancing their physicochemical properties, improving transfection efficiency, reducing cytotoxicity and incorporating functional groups that offer better targetability and higher cellular uptake have been established. Here, we look at two potential polymeric carriers, chitosan and poly(amidoamine) dendrimers, which have been widely reported for gene delivery. For chitosan, the interest arises from its availability, excellent non-cytotoxicity profile, biodegradability and ease of modification. For poly(amidoamine) dendrimers, the interest arises from their ease of synthesis with controlled structure and size, minimal cytotoxicity, biodegradability and high transfection efficiencies. The latest developments on these polymers for gene delivery are the main focus of this article. PMID:20618156
Model-Independent Bounds on Kinetic Mixing
Hook, Anson; Izaguirre, Eder; Wacker, Jay G.
2011-01-01
New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ϵ ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.
Conformal Galilei algebras, symmetric polynomials and singular vectors
NASA Astrophysics Data System (ADS)
Křižka, Libor; Somberg, Petr
2018-01-01
We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d, ℂ) with d = 1 for any integer value ℓ ∈ ℕ. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.
Community detection in sequence similarity networks based on attribute clustering
Chowdhary, Janamejaya; Loeffler, Frank E.; Smith, Jeremy C.
2017-07-24
Networks are powerful tools for the presentation and analysis of interactions in multi-component systems. A commonly studied mesoscopic feature of networks is their community structure, which arises from grouping together similar nodes into one community and dissimilar nodes into separate communities. Here, the community structure of protein sequence similarity networks is determined with a new method: Attribute Clustering Dependent Communities (ACDC). Sequence similarity has hitherto typically been quantified by the alignment score or its expectation value; pair alignments with the same score or expectation value thus cannot be differentiated. To overcome this deficiency, the method constructs, for pair alignments, an extended alignment metric, the link attribute vector, which includes the score and other alignment characteristics. Rescaling components of the attribute vectors qualitatively identifies a systematic variation of sequence similarity within protein superfamilies. The problem of community detection is then mapped to clustering the link attribute vectors, selection of an optimal subset of links and community structure refinement based on the partition density of the network. ACDC-predicted communities are found to be in good agreement with gold standard sequence databases for which the "ground truth" community structures (or families) are known. ACDC is therefore a community detection method for sequence similarity networks based entirely on pair similarity information. A serial implementation of ACDC is available from https://cmb.ornl.gov/resources/developments
NASA Technical Reports Server (NTRS)
Walker, H. F.
1979-01-01
In many pattern recognition problems, data vectors must be classified even though one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.
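One standard way to classify an incomplete vector under Gaussian class models is to evaluate each class density over the observed components only; the report's optimal linear procedures may differ in detail. A minimal sketch:

```python
import numpy as np

def classify_incomplete(x, means, covs, priors):
    """Classify a vector whose missing entries are np.nan by scoring each
    class's Gaussian marginal over the observed components only.

    means[c], covs[c], priors[c] describe class c. Marginalizing over the
    missing coordinates is one standard approach; the report's optimal
    linear discrimination rules may differ in detail.
    """
    obs = ~np.isnan(x)
    xo = x[obs]
    best_c, best_score = None, -np.inf
    for c, (mu, S, p) in enumerate(zip(means, covs, priors)):
        d = xo - mu[obs]
        S_oo = S[np.ix_(obs, obs)]
        _, logdet = np.linalg.slogdet(S_oo)
        score = np.log(p) - 0.5 * (logdet + d @ np.linalg.solve(S_oo, d))
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```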
NASA Astrophysics Data System (ADS)
Mushlihuddin, R.; Nurafifah; Irvan
2018-01-01
Students’ low ability in mathematical problem solving points to a less effective learning process in the classroom. Effective learning is learning that improves students’ mathematical skills, one of which is problem-solving ability. Problem-solving ability consists of several stages: understanding the problem, planning a solution, solving the problem as planned, and re-examining the procedure and the outcome. The purpose of this research was to determine: (1) does the PBL model influence the improvement of students’ mathematical problem-solving ability in a vector analysis course?; (2) is the PBL model effective in improving students’ mathematical problem-solving skills in vector analysis courses? This research was a quasi-experiment. The data analysis proceeded from descriptive statistics through a normality prerequisite test to hypothesis testing using the ANCOVA test and the gain test. The results showed that: (1) the PBL model influences the improvement of students’ mathematical problem-solving abilities in vector analysis courses; (2) the PBL model is effective in improving students’ problem-solving skills in vector analysis courses, with a medium category.
High-dimensional vector semantics
NASA Astrophysics Data System (ADS)
Andrecut, M.
In this paper we explore the “vector semantics” problem from the perspective of “almost orthogonal” property of high-dimensional random vectors. We show that this intriguing property can be used to “memorize” random vectors by simply adding them, and we provide an efficient probabilistic solution to the set membership problem. Also, we discuss several applications to word context vector embeddings, document sentences similarity, and spam filtering.
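The "memorize by adding" idea can be demonstrated in a few lines; the dimension, threshold, and ±1 encoding below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10000, 50                       # dimension, number of stored items

# Independent random +/-1 vectors in high dimension are almost orthogonal,
# so the plain sum of the stored vectors still correlates strongly with
# each of them and only weakly with anything else.
codebook = {f"item{i}": rng.choice([-1.0, 1.0], size=d) for i in range(n)}
memory = sum(codebook.values())

def contains(memory, v, d):
    # Correlation is ~1 for stored vectors, ~0 +/- O(sqrt(n/d)) otherwise.
    return (memory @ v) / d > 0.5

print(contains(memory, codebook["item7"], d))                # True w.h.p.
print(contains(memory, rng.choice([-1.0, 1.0], size=d), d))  # False w.h.p.
```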
Cosmology for quadratic gravity in generalized Weyl geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiménez, Jose Beltrán; Heisenberg, Lavinia; Koivisto, Tomi S.
A class of vector-tensor theories arises naturally in the framework of quadratic gravity in spacetimes with linear vector distortion. Requiring the absence of ghosts for the vector field imposes an interesting condition on the allowed connections with vector distortion: the resulting one-parameter family of connections generalises the usual Weyl geometry with polar torsion. The cosmology of this class of theories is studied, focusing on isotropic solutions wherein the vector field is dominated by the temporal component. De Sitter attractors are found and inhomogeneous perturbations around such backgrounds are analysed. In particular, further constraints on the models are imposed by excluding pathologies in the scalar, vector and tensor fluctuations. Various exact background solutions are presented, describing a constant and an evolving dark energy, a bounce and a self-tuning de Sitter phase. However, the latter two scenarios are not viable under a closer scrutiny.
Security and Privacy Grand Challenges for the Internet of Things
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, Glenn A.; Zarzhitsky, Dimitri V.; Carroll, Thomas E.
The growth of the Internet of Things (IoT) is driven by market pressures, and while security is being considered, the unintended consequences of billions of such devices connecting to the Internet cannot be described with existing mathematical methods. The possibilities for illicit surveillance through lifestyle analysis, unauthorized access to information, and new attack vectors will continue to increase by 2020, when up to 50 billion devices may be connected. This paper discusses various kinds of vulnerabilities that can be expected to arise, and presents a research agenda for mitigating the worst of the impacts. We hope to draw research attention to the potential dangers of IoT so that many of these problems can be avoided.
Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report
NASA Technical Reports Server (NTRS)
Seywald, Hans
1991-01-01
The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical plane, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector-valued control functions.
Rate determination from vector observations
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.
1993-01-01
Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line of sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
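The "differential" TRIAD idea can be sketched as follows: build a triad frame from the two observed LOS vectors at each epoch, form the incremental direction cosine matrix, and read the rate off its skew-symmetric part. Sign conventions depend on how the frames are defined, so treat this as a sketch rather than the paper's algorithm.

```python
import numpy as np

def triad(v1, v2):
    """Orthonormal triad spanned by two non-parallel observation vectors."""
    t1 = v1 / np.linalg.norm(v1)
    t2 = np.cross(v1, v2)
    t2 /= np.linalg.norm(t2)
    return np.column_stack((t1, t2, np.cross(t1, t2)))

def body_rate(v1_a, v2_a, v1_b, v2_b, dt):
    """Angular rate from two LOS vectors observed at t (_a) and t+dt (_b),
    with no knowledge of the reference directions."""
    dC = triad(v1_b, v2_b) @ triad(v1_a, v2_a).T  # incremental DCM
    S = (dC - dC.T) / 2.0                         # ~ -[w x] dt for small dt
    return -np.array([S[2, 1], S[0, 2], S[1, 0]]) / dt

# Demo: a body spinning at 0.1 rad/s about z rotates the observed components.
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

dt = 0.01
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])
R = rot_z(0.1 * dt)
print(body_rate(v1, v2, R @ v1, R @ v2, dt))      # ~ [0, 0, 0.1]
```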
Progresses towards safe and efficient gene therapy vectors.
Chira, Sergiu; Jackson, Carlo S; Oprea, Iulian; Ozturk, Ferhat; Pepper, Michael S; Diaconu, Iulia; Braicu, Cornelia; Raduly, Lajos-Zsolt; Calin, George A; Berindan-Neagoe, Ioana
2015-10-13
The emergence of genetic engineering at the beginning of the 1970s opened the era of biomedical technologies, which aims to improve human health using genetic manipulation techniques in a clinical context. Gene therapy represents an innovative and appealing strategy for treatment of human diseases, which utilizes vehicles or vectors for delivering therapeutic genes into the patients' body. However, a few past unsuccessful events that negatively marked the beginning of gene therapy resulted in the need for further studies regarding the design and biology of gene therapy vectors, so that this innovative treatment approach can successfully move from bench to bedside. In this paper, we review the major gene delivery vectors and recent improvements made in their design meant to overcome the issues that commonly arise with the use of gene therapy vectors. At the end of the manuscript, we summarize the main advantages and disadvantages of common gene therapy vectors and discuss possible future directions for potential therapeutic vectors.
Thrips advisor: exploiting thrips-induced defences to combat pests on crops.
Steenbergen, Merel; Abd-El-Haliem, Ahmed; Bleeker, Petra; Dicke, Marcel; Escobar-Bravo, Rocio; Cheng, Gang; Haring, Michel A; Kant, Merijn R; Kappers, Iris; Klinkhamer, Peter G L; Leiss, Kirsten A; Legarrea, Saioa; Macel, Mirka; Mouden, Sanae; Pieterse, Corné M J; Sarde, Sandeep J; Schuurink, Robert C; De Vos, Martin; Van Wees, Saskia C M; Broekgaarden, Colette
2018-04-09
Plants have developed diverse defence mechanisms to ward off herbivorous pests. However, agriculture still faces estimated crop yield losses ranging from 25% to 40% annually. These losses arise not only because of direct feeding damage, but also because many pests serve as vectors of plant viruses. Herbivorous thrips (Thysanoptera) are important pests of vegetable and ornamental crops worldwide, and encompass virtually all general problems of pests: they are highly polyphagous, hard to control because of their complex lifestyle, and they are vectors of destructive viruses. Currently, control management of thrips mainly relies on the use of chemical pesticides. However, thrips rapidly develop resistance to these pesticides. With the rising demand for more sustainable, safer, and healthier food production systems, we urgently need to pinpoint the gaps in knowledge of plant defences against thrips to enable the future development of novel control methods. In this review, we summarize the current, rather scarce, knowledge of thrips-induced plant responses and the role of phytohormonal signalling and chemical defences in these responses. We describe concrete opportunities for breeding resistance against pests such as thrips as a prototype approach for next-generation resistance breeding.
Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de; Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT; Brookes, Mike
In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
Highlights:
• Vector tomography is used to reconstruct electric fields generated by dipole sources.
• Inverse solutions are based on longitudinal and transverse line integral measurements.
• Transverse line integral measurements are used as a sparsity constraint.
• Numerical procedure to approximate the line integrals is described in detail.
• Patterns of the studied electric fields are correctly estimated.
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Vides, Jeaniffer; Gurski, Katharine; Nkonga, Boniface; Dumbser, Michael; Garain, Sudip; Audit, Edouard
2016-01-01
Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The self-similar formulation of Balsara [16] proves especially useful for this purpose. While that work is based on a Galerkin projection, in this paper we present an analogous self-similar formulation that is based on a different interpretation. In the present formulation, we interpret the shock jumps at the boundary of the strongly-interacting state quite literally. The enforcement of the shock jump conditions is done with a least squares projection (Vides, Nkonga and Audit [67]). With that interpretation, we again show that the multidimensional Riemann solver can be endowed with sub-structure. However, we find that the most efficient implementation arises when we use a flux vector splitting and a least squares projection. An alternative formulation that is based on the full characteristic matrices is also presented. The multidimensional Riemann solvers that are demonstrated here use one-dimensional HLLC Riemann solvers as building blocks. Several stringent test problems drawn from hydrodynamics and MHD are presented to show that the method works. Results from structured and unstructured meshes demonstrate the versatility of our method. The reader is also invited to watch a video introduction to multidimensional Riemann solvers on http://www.nd.edu/~dbalsara/Numerical-PDE-Course.
Triangles with Integer Dimensions
ERIC Educational Resources Information Center
Gilbertson, Nicholas J.; Rogers, Kimberly Cervello
2016-01-01
Interesting and engaging mathematics problems can come from anywhere. Sometimes great problems arise from interesting contexts. At other times, interesting problems arise from asking "what if" questions while appreciating the structure and beauty of mathematics. The intriguing problem described in this article resulted from the second…
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Hart, Deirdre E.
2017-04-01
Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise from procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps is followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used convention can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
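The wrap-safe interpolation step can be illustrated directly: convert angles to unit-vector components, interpolate the components, and convert back, doubling the angle first for quantities defined modulo 180°. A minimal sketch, not the authors' full production/conversion pipeline:

```python
import numpy as np

def interp_angle_deg(x, xp, angles_deg, period=360.0):
    """Interpolate an angular quantity without 359-to-0 jump artifacts by
    interpolating the Cartesian components of the corresponding unit
    vectors and converting back. Use period=180.0 for quantities such as
    ellipse inclination that are defined modulo 180 degrees.
    """
    theta = np.deg2rad(np.asarray(angles_deg) * (360.0 / period))
    c = np.interp(x, xp, np.cos(theta))
    s = np.interp(x, xp, np.sin(theta))
    return (np.rad2deg(np.arctan2(s, c)) * (period / 360.0)) % period

# Naive interpolation midway between 359 deg and 1 deg gives 180 deg;
# the vector form correctly returns ~0 deg.
print(interp_angle_deg(0.5, [0.0, 1.0], [359.0, 1.0]))
```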
Identification of approximately duplicate material records in ERP systems
NASA Astrophysics Data System (ADS)
Zong, Wei; Wu, Feng; Chu, Lap-Keung; Sculli, Domenic
2017-03-01
The quality of master data is crucial for the accurate functioning of the various modules of an enterprise resource planning (ERP) system. This study addresses specific data problems arising from the generation of approximately duplicate material records in ERP databases. Such problems are mainly due to the firm's lack of unique and global identifiers for the material records, and to the arbitrary assignment of alternative names for the same material by various users. Traditional duplicate detection methods are ineffective in identifying such approximately duplicate material records because these methods typically rely on string comparisons of each field. To address this problem, a machine learning-based framework is developed to recognise semantic similarity between strings and to further identify and reunify approximately duplicate material records - a process referred to as de-duplication in this article. First, the keywords of the material records are extracted to form vectors of discriminating words. Second, a machine learning method using a probabilistic neural network is applied to determine the semantic similarity between these material records. The approach was evaluated using data from a real case study. The test results indicate that the proposed method outperforms traditional algorithms in identifying approximately duplicate material records.
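A rough sketch of the candidate-pair stage is below; TF-IDF character n-grams and cosine similarity stand in for the paper's discriminating-word vectors and probabilistic-neural-network similarity, so the featurization and threshold are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def candidate_duplicates(records, threshold=0.8):
    """Flag pairs of material descriptions with highly similar keyword
    vectors. Character n-grams tolerate variants such as 'M8x40' vs
    'M8 x 40'; the threshold is an illustrative choice.
    """
    X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(records)
    S = cosine_similarity(X)
    n = len(records)
    return [(i, j, S[i, j]) for i in range(n) for j in range(i + 1, n)
            if S[i, j] >= threshold]

print(candidate_duplicates(["hex bolt M8x40 steel", "steel hex bolt M8 x 40"]))
```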
NASA Astrophysics Data System (ADS)
Macher, W.; Oswald, T. H.
2011-02-01
In the investigation of antenna systems which consist of one or several monopoles, a realistic modeling of the monopole radii is not always feasible. In particular, physical scale models for electrolytic tank measurements of effective length vectors (rheometry) of spaceborne monopoles are so small that a correct scaling of monopole radii often results in very thin, flexible antenna wires which bend too much under their own weight. So one has to use monopoles in the model which are thicker than the correct scale diameters. The opposite case, where the monopole radius has to be modeled too thin, appears with certain numerical antenna programs based on wire grid modeling. This problem arises if the underlying algorithm assumes that the wire segments are much longer than their diameters. In such a case it may not be possible to use wires of correct thickness to model the monopoles. In order that these numerical and experimental techniques can nonetheless be applied to determine the capacitances and effective length vectors of such monopoles (with an inaccurate modeling of monopole diameters), an analytical correction method is devised. It enables one to calculate the quantities for the real antenna system from those obtained for the model antenna system with incorrect monopole radii. Since a typical application of the presented formalism is the analysis of spaceborne antenna systems, an illustration for the monopoles of the WAVES experiment on board the STEREO-A spacecraft is given.
Retrograde spins of near-Earth asteroids from the Yarkovsky effect.
La Spina, A; Paolicchi, P; Kryszczyńska, A; Pravec, P
2004-03-25
Dynamical resonances in the asteroid belt are the gateway for the production of near-Earth asteroids (NEAs). To generate the observed number of NEAs, however, requires the injection of many asteroids into those resonant regions. Collisional processes have long been claimed as a possible source, but difficulties with that idea have led to the suggestion that orbital drift arising from the Yarkovsky effect dominates the injection process. (The Yarkovsky effect is a force arising from differential heating: the 'afternoon' side of an asteroid is warmer than the 'morning' side.) The two models predict different rotational properties of NEAs: the usual collisional theories are consistent with a nearly isotropic distribution of rotation vectors, whereas the 'Yarkovsky model' predicts an excess of retrograde rotations. Here we report that the spin vectors of NEAs show a strong and statistically significant excess of retrograde rotations, quantitatively consistent with the theoretical expectations of the Yarkovsky model.
Sexual selection expedites the evolution of pesticide resistance.
Jacomb, Frances; Marsh, Jason; Holman, Luke
2016-12-01
The evolution of insecticide resistance by crop pests and disease vectors causes serious problems for agriculture and health. Sexual selection can accelerate or hinder adaptation to abiotic challenges in a variety of ways, but the effect of sexual selection on resistance evolution is little studied. Here, we examine this question using experimental evolution in the pest insect Tribolium castaneum. The experimental removal of sexual selection slowed the evolution of resistance in populations treated with pyrethroid pesticide, and also reduced the rate at which resistance was lost from pesticide-free populations. These results suggest that selection arising from variance in mating and fertilization success can augment natural selection on pesticide resistance, meaning that sexual selection should be considered when designing strategies to limit the evolution of pesticide resistance.
Efficient multitasking of Choleski matrix factorization on CRAY supercomputers
NASA Technical Reports Server (NTRS)
Overman, Andrea L.; Poole, Eugene L.
1991-01-01
A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
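The column-oriented loop ordering that such solvers parallelize can be shown on a dense matrix; the variable-band storage scheme and the CRAY multitasking directives of the paper are omitted here. A minimal sketch:

```python
import numpy as np

def cholesky_columns(A):
    """Column-oriented Choleski factorization A = L L^T.

    Each column update is a matrix-vector style operation that vectorizes
    well, and independent column updates can be multitasked; dense storage
    is used here in place of the paper's variable-band scheme.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # Subtract the contributions of previously computed columns.
        s = A[j:, j] - L[j:, :j] @ L[j, :j]
        L[j, j] = np.sqrt(s[0])
        L[j + 1:, j] = s[1:] / L[j, j]
    return L

A = np.array([[4.0, 2.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
L = cholesky_columns(A)
assert np.allclose(L @ L.T, A)
```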
Accelerated Cartesian expansions for the rapid solution of periodic multiscale problems
Baczewski, Andrew David; Dault, Daniel L.; Shanker, Balasubramaniam
2012-07-03
We present an algorithm for the fast and efficient solution of integral equations that arise in the analysis of scattering from periodic arrays of PEC objects, such as multiband frequency selective surfaces (FSS) or metamaterial structures. Our approach relies upon the method of Accelerated Cartesian Expansions (ACE) to rapidly evaluate the requisite potential integrals. ACE is analogous to FMM in that it can be used to accelerate the matrix vector product used in the solution of systems discretized using MoM. Here, ACE provides linear scaling in both CPU time and memory. Details regarding the implementation of this method within the context of periodic systems are provided, as well as results that establish error convergence and scalability. In addition, we also demonstrate the applicability of this algorithm by studying several exemplary electrically dense systems.
Reversible vector ratchets for skyrmion systems
NASA Astrophysics Data System (ADS)
Ma, X.; Reichhardt, C. J. Olson; Reichhardt, C.
2017-03-01
We show that ac driven skyrmions interacting with an asymmetric substrate provide a realization of a class of ratchet system which we call a vector ratchet that arises due to the effect of the Magnus term on the skyrmion dynamics. In a vector ratchet, the dc motion induced by the ac drive can be described as a vector that can be rotated clockwise or counterclockwise relative to the substrate asymmetry direction. Up to a full 360° rotation is possible for varied ac amplitudes or skyrmion densities. In contrast to overdamped systems, in which ratchet motion is always parallel to the substrate asymmetry direction, vector ratchets allow the ratchet motion to be in any direction relative to the substrate asymmetry. It is also possible to obtain a reversal in the direction of rotation of the vector ratchet, permitting the creation of a reversible vector ratchet. We examine vector ratchets for ac drives applied parallel or perpendicular to the substrate asymmetry direction, and show that reverse ratchet motion can be produced by collective effects. No reversals occur for an isolated skyrmion on an asymmetric substrate. Since a vector ratchet can produce motion in any direction, it could represent a method for controlling skyrmion motion for spintronic applications.
The Problems of Diagnosis and Remediation of Dyscalculia.
ERIC Educational Resources Information Center
Price, Nigel; Youe, Simon
2000-01-01
Focuses on the problems of diagnosis and remediation of dyscalculia. Explores whether there is justification for believing that specific difficulty with mathematics arises jointly with a specific language problem, or whether a specific difficulty with mathematics can arise independently of problems with language. Uses a case study to illuminate…
Generalized vector calculus on convex domain
NASA Astrophysics Data System (ADS)
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
The covariance matrix for the solution vector of an equality-constrained least-squares problem
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1976-01-01
Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
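One common construction, the null-space method, gives this covariance directly; whether it matches the algorithm-specific formulas in Lawson and Hanson's book is not claimed here. A sketch, assuming homoscedastic noise of variance sigma2 in the right-hand side b:

```python
import numpy as np
from scipy.linalg import null_space

def constrained_ls_covariance(A, C, sigma2=1.0):
    """Covariance of the solution of min ||Ax - b|| subject to Cx = d.

    Null-space method: write x = x0 + Z y with Z an orthonormal basis of
    null(C); only y is affected by noise in b, so
        cov(x) = sigma2 * Z ((A Z)^T (A Z))^{-1} Z^T.
    """
    Z = null_space(C)
    AZ = A @ Z
    return sigma2 * Z @ np.linalg.inv(AZ.T @ AZ) @ Z.T
```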
Guidance and control strategies for aerospace vehicles
NASA Technical Reports Server (NTRS)
Naidu, Desineni S.; Hibey, Joseph L.
1989-01-01
The optimal control problem arising in coplanar orbital transfer employing aeroassist technology and the fuel-optimal control problem arising in orbital transfer vehicles employing aeroassist technology are addressed.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
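The reachability primitive used in the third problem is elementary; the paper's actual graph construction for precision arrival scheduling is not reproduced here. A breadth-first sketch:

```python
from collections import deque

def reachable(adj, src):
    """Breadth-first reachability in a directed graph.

    adj: dict mapping node -> iterable of successor nodes.
    """
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

print(reachable({"a": ["b"], "b": ["c"], "d": ["a"]}, "a"))  # {'a', 'b', 'c'}
```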
Wiimote Experiments: 3-D Inclined Plane Problem for Reinforcing the Vector Concept
ERIC Educational Resources Information Center
Kawam, Alae; Kouh, Minjoon
2011-01-01
In an introductory physics course where students first learn about vectors, they oftentimes struggle with the concept of vector addition and decomposition. For example, the classic physics problem involving a mass on an inclined plane requires the decomposition of the force of gravity into two directions that are parallel and perpendicular to the…
NASA Astrophysics Data System (ADS)
Lutich, Andrey
2017-07-01
This research considers the problem of generating compact vector representations of physical design patterns for analytics purposes in the semiconductor patterning domain. PatterNet uses a deep artificial neural network to learn a mapping of physical design patterns to a compact Euclidean hyperspace. Distances among mapped patterns in this space correspond to dissimilarities among patterns defined at the time of the network training. Once the mapping network has been trained, PatterNet embeddings can be used as feature vectors with standard machine learning algorithms, and pattern search, comparison, and clustering become trivial problems. PatterNet is inspired by the concepts developed within the framework of generative adversarial networks as well as by FaceNet. Our method has a deep neural network (DNN) learn the compact representation directly by supplying it with pairs of design patterns and the dissimilarity between these patterns defined by a user. In the simplest case, the dissimilarity is represented by the area of the XOR of the two patterns. It is important to realize that our PatterNet approach is very different from the methods developed for deep learning on image data: in contrast to "conventional" pictures, patterns in the CAD world are lists of polygon vertex coordinates. The method relies solely on the promise of deep learning to discover the internal structure of the incoming data and learn its hierarchical representations. Artificial intelligence arising from the combination of PatterNet and clustering analysis very precisely follows the intuition of patterning/optical proximity correction experts, paving the way toward human-like and human-friendly engineering tools.
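The simplest dissimilarity named above, the XOR area, is straightforward once patterns are rasterized; the rasterization itself and the pixel size are assumed inputs in this sketch.

```python
import numpy as np

def xor_area(a, b, pixel_area=1.0):
    """Area of the symmetric difference of two rasterized layout clips,
    the simplest user-defined dissimilarity mentioned above."""
    return np.count_nonzero(np.asarray(a, bool) ^ np.asarray(b, bool)) * pixel_area
```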
AllergenFP: allergenicity prediction by descriptor fingerprints.
Dimitrov, Ivan; Naneva, Lyudmila; Doytchinova, Irini; Bangov, Ivan
2014-03-15
Allergenicity, like antigenicity and immunogenicity, is a property encoded linearly and non-linearly, and therefore alignment-based approaches are not able to identify this property unambiguously. A novel alignment-free descriptor-based fingerprint approach is presented here and applied to identify allergens and non-allergens. The approach was implemented as a four-step algorithm. Initially, the protein sequences are described by amino acid principal properties such as hydrophobicity, size, relative abundance, helix and β-strand forming propensities. Then, the generated strings of different length are converted into vectors of equal length by auto- and cross-covariance (ACC). The vectors are transformed into binary fingerprints and compared in terms of the Tanimoto coefficient. The approach was applied to a set of 2427 known allergens and 2427 non-allergens and correctly identified 88% of them, with a Matthews correlation coefficient of 0.759. The descriptor fingerprint approach presented here is universal: it could be applied to any classification problem in computational biology. The set of E-descriptors is able to capture the main structural and physicochemical properties of the amino acids building the proteins. The ACC transformation overcomes the main problem in alignment-based comparative studies, arising from the different lengths of the aligned protein sequences. The conversion of protein ACC values into binary descriptor fingerprints allows similarity search and classification. The algorithm described in the present study was implemented in a specially designed Web site, named AllergenFP (FP stands for FingerPrint). AllergenFP is written in Python, with a GUI in HTML. It is freely accessible at http://ddg-pharmfac.net/AllergenFP. Contact: idoytchinova@pharmfac.net or ivanbangov@shu-bg.net.
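The two computational steps named in the abstract, the ACC transform and the Tanimoto comparison, can be sketched as follows; the descriptor values, lag range, and median thresholding are illustrative assumptions rather than the published parameterization.

```python
import numpy as np

def acc_transform(descriptors, max_lag=8):
    """Auto- and cross-covariance (ACC) transform: maps a variable-length
    sequence of per-residue descriptor vectors (shape L x k, with
    L > max_lag) to a fixed-length vector, enabling alignment-free
    comparison.
    """
    X = np.asarray(descriptors, dtype=float)
    X = X - X.mean(axis=0)
    L = X.shape[0]
    acc = [(X[:-lag].T @ X[lag:]).ravel() / (L - lag)
           for lag in range(1, max_lag + 1)]
    return np.concatenate(acc)

def binarize(v):
    return v > np.median(v)          # one simple thresholding choice

def tanimoto(a, b):
    """Tanimoto coefficient between binary fingerprints."""
    return (a & b).sum() / (a | b).sum()

# Toy example: 3 descriptors per residue for two 20-residue sequences.
rng = np.random.default_rng(1)
fp1 = binarize(acc_transform(rng.standard_normal((20, 3))))
fp2 = binarize(acc_transform(rng.standard_normal((20, 3))))
print(tanimoto(fp1, fp2))
```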
Aguayo-Ortiz, A; Mendoza, S; Olvera, D
2018-01-01
In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled differential conservative equations. This method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. With this, a traditional finite volume method for the flux is applied in order to avoid violation of both the entropy and Rankine-Hugoniot jump conditions. The time evolution is then computed using a forward finite difference scheme. This numerical technique avoids recovering the primitive vector by solving an algebraic system of equations, as is often done, and so generalises standard techniques for solving these kinds of coupled systems. The article is presented with special relativistic hydrodynamic numerical schemes in mind, with a pedagogical appendix to ease comprehension of the PVRS. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector solution through an algebraic polynomial of the charges.
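For a feel of the building blocks, here is a one-step finite-volume update with a Lax-Friedrichs flux and forward-Euler time integration on a scalar conservation law; a scalar field needs no recovery step, so the chain-rule aspect of the PVRS itself is not exercised in this sketch.

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt, flux, max_speed):
    """One forward-Euler finite-volume step with a Lax-Friedrichs flux on
    a periodic grid: conservative fluxes, forward finite difference in
    time, as in the abstract's building blocks."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    f_r = 0.5 * (flux(u) + flux(up)) - 0.5 * max_speed * (up - u)
    f_l = 0.5 * (flux(um) + flux(u)) - 0.5 * max_speed * (u - um)
    return u - dt / dx * (f_r - f_l)

# Shock-tube-like test for the inviscid Burgers equation, f(u) = u^2 / 2.
x = np.linspace(0.0, 1.0, 400, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.1)
for _ in range(200):
    u = lax_friedrichs_step(u, dx=x[1] - x[0], dt=0.001,
                            flux=lambda q: 0.5 * q * q, max_speed=1.0)
```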
Some lemma on spectrum of eigen value regarding power method
NASA Astrophysics Data System (ADS)
Jamali, A. R. M. Jalal Uddin; Alam, Md. Sah
2017-04-01
Eigenvalue problems arise in almost all fields of science and engineering. Several smart methods exist in the literature, but most of them find only eigenvalues and cannot find the corresponding eigenvectors. In many engineering and scientific applications, both the largest and the smallest eigenpairs are required. The power method is a very simple but powerful tool for finding the largest eigenvalue and the corresponding eigenvector (eigenpair). The inverse power method can be applied to find the smallest and/or other desired eigenpairs, but it is known to be computationally very costly. On the other hand, by using the shifting property, the power method can find further eigenpairs; however, the position of such an eigenvalue within the spectrum of eigenvalues is not identified. In this regard we propose four lemmas associated with a modified power method. Each lemma is proved in detail. The modified power method is implemented and illustrated with an example to verify the lemmas. Using the lemmas, the modified power algorithm is able to find both the largest and the smallest eigenpairs successfully and efficiently in some cases. Moreover, with the help of the lemmas, the algorithm is able to detect the sign (positive or negative) of the eigenvalues.
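The power method and the shifting trick discussed above are easy to state in code; the paper's specific lemmas are not reproduced here, only the mechanism they concern.

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10000, seed=0):
    """Power iteration: dominant (largest-magnitude) eigenpair of A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                  # Rayleigh quotient (x is a unit vector)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
lam1, _ = power_method(A)
# Shifting: the dominant eigenvalue of A - lam1*I is lam_k - lam1 for the
# eigenvalue lam_k farthest from lam1; here that is the smallest one.
mu, _ = power_method(A - lam1 * np.eye(2))
print(lam1, mu + lam1)                   # ~5.0, ~2.0
```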
Preconditioned MoM Solutions for Complex Planar Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Jackson, D; Champagne, N
2004-01-23
The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered-material Green's functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported elsewhere; this contribution is limited to presenting new results.
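The grid-interaction speedup rests on the fact that translation-invariant kernels produce Toeplitz matrices, whose matrix-vector product can be embedded in a circulant one and evaluated with FFTs. A one-dimensional sketch (FAIM's actual 2D/3D grids and basis projections are omitted):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by x in
    O(n log n) by embedding it in a 2n x 2n circulant matrix, whose
    matvec diagonalizes under the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])          # circulant embedding
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

c = np.array([2.0, -1.0, 0.5, 0.1])
r = np.array([2.0, 0.3, 0.2, 0.05])
x = np.arange(4.0)
assert np.allclose(toeplitz_matvec_fft(c, r, x), toeplitz(c, r) @ x)
```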
Inequalities, Assessment and Computer Algebra
ERIC Educational Resources Information Center
Sangwin, Christopher J.
2015-01-01
The goal of this paper is to examine single variable real inequalities that arise as tutorial problems and to examine the extent to which current computer algebra systems (CAS) can (1) automatically solve such problems and (2) determine whether students' own answers to such problems are correct. We review how inequalities arise in contemporary…
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-10
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
ERIC Educational Resources Information Center
Barniol, Pablo; Zavala, Genaro
2014-01-01
In this article we compare students' understanding of vector concepts in problems with no physical context, and with three mechanics contexts: force, velocity, and work. Based on our "Test of Understanding of Vectors," a multiple-choice test presented elsewhere, we designed two isomorphic shorter versions of 12 items each: a test with no…
Classification of robust heteroclinic cycles for vector fields in ℝ^3 with symmetry
NASA Astrophysics Data System (ADS)
Hawker, David; Ashwin, Peter
2005-09-01
We consider a classification of robust heteroclinic cycles in the positive octant of ℝ^3 under the action of the symmetry group (ℤ_2)^3. We introduce a coding system to represent different classes up to a topological equivalence, and produce a characterization of all types of robust heteroclinic cycle that can arise in this situation. These cycles may or may not contain the origin within the cycle. We proceed to find a connection between our problem and meandric numbers. We find a direct correlation between the number of classes of robust heteroclinic cycle that do not include the origin and the 'Mercedes-Benz' sequence of integers characterizing meanders through a 'Y-shaped' configuration. We investigate upper and lower bounds for the number of classes possible for robust cycles between n equilibria, one of which may be the origin.
Impacts of recreational motorboats on fishes: a review.
Whitfield, A K; Becker, A
2014-06-15
A considerable amount of research has been conducted on the impacts of recreational boating activities on fishes, but little or no synthesis of the information has been undertaken. This review shows that motorboats affect the biology and ecology of fishes, with effects that vary according to the species and even particular size classes. Direct hits on fishes by propellers are an obvious impact, but this aspect has been poorly documented. Alterations in the wave climate and water turbidity may also influence fishes and their habitats, especially submerged and emergent plant beds. Sound generated by boat motors can also influence the communication and behaviour of certain species. Pollution arising from fuel spillages, exhaust emissions and antifouling paints all has detrimental effects on fishes. Finally, the use of recreational boats as vectors of aquatic invasive organisms is very real and has created major problems for the ecology of aquatic systems.
A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.
Li, Shan; Kang, Liying; Zhao, Xing-Ming
2014-01-01
With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, especially those based on evolutionary algorithms, in bioinformatics. We focus on their applications to three common problems that arise in bioinformatics: feature selection, parameter estimation, and reconstruction of biological networks.
Topological features of vector vortex beams perturbed with uniformly polarized light
D’Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo
2017-01-01
Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.
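A rough numerical companion to this abstract (not the experimental parameters): superpose a radial vector beam with a uniformly polarized Gaussian on a grid and look for joint zeros of the Stokes fields S1 and S2, where the polarization becomes circular. The beam profiles, perturbation amplitude eps, and relative phase are assumptions.

```python
import numpy as np

n = 501
x = np.linspace(-3.0, 3.0, n)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
g = np.exp(-R**2)                        # Gaussian envelope

# Radial vector beam plus a uniform horizontally polarized perturbation
eps = 0.3 * np.exp(0.5j)                 # assumed amplitude and relative phase
Ex = R * g * np.cos(PHI) + eps * g
Ey = R * g * np.sin(PHI)

# Stokes fields; C-points are the joint zeros of S1 and S2
S1 = np.abs(Ex)**2 - np.abs(Ey)**2
S2 = 2.0 * np.real(Ex * np.conj(Ey))
i, j = np.unravel_index(np.argmin(np.hypot(S1, S2)), S1.shape)
print("grid point closest to a polarization singularity:", X[i, j], Y[i, j])
```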
Hollingsworth, T. Déirdre; Pulliam, Juliet R.C.; Funk, Sebastian; Truscott, James E.; Isham, Valerie; Lloyd, Alun L.
2015-01-01
Many of the challenges which face modellers of directly transmitted pathogens also arise when modelling the epidemiology of pathogens with indirect transmission – whether through environmental stages, vectors, intermediate hosts or multiple hosts. In particular, understanding the roles of different hosts, how to measure contact and infection patterns, heterogeneities in contact rates, and the dynamics close to elimination are all relevant challenges, regardless of the mode of transmission. However, there remain a number of challenges that are specific and unique to modelling vector-borne diseases and macroparasites. Moreover, many of the neglected tropical diseases which are currently targeted for control and elimination are vector-borne, macroparasitic, or both, and so this article includes challenges which will assist in accelerating the control of these high-burden diseases. Here, we discuss the challenges of indirect measures of infection in humans, whether through vectors or transmission life stages and in estimating the contribution of different host groups to transmission. We also discuss the issues of “evolution-proof” interventions against vector-borne disease.
Efficient Parallel Formulations of Hierarchical Methods and Their Applications
NASA Astrophysics Data System (ADS)
Grama, Ananth Y.
1996-01-01
Hierarchical methods such as the Fast Multipole Method (FMM) and Barnes-Hut (BH) are used for rapid evaluation of potential (gravitational, electrostatic) fields in particle systems. They are also used for solving integral equations using boundary element methods. The linear systems arising from these methods are dense and are solved iteratively. Hierarchical methods reduce the complexity of the core matrix-vector product from O(n^2) to O(n log n) and the memory requirement from O(n^2) to O(n). We have developed highly scalable parallel formulations of a hybrid FMM/BH method that are capable of handling arbitrarily irregular distributions. We apply these formulations to astrophysical simulations of Plummer and Gaussian galaxies. We have used our parallel formulations to solve the integral form of the Laplace equation. We show that our parallel hierarchical mat-vecs yield high efficiency and overall performance even on relatively small problems. A problem containing approximately 200K nodes takes under a second to compute on 256 processors and yet yields over 85% efficiency. The efficiency and raw performance are expected to increase for bigger problems. For the 200K node problem, our code delivers about 5 GFLOPS of performance on a 256 processor T3D. This is impressive considering that the problem involves floating-point divides and square roots and has very little locality, resulting in poor cache performance. A dense matrix-vector product of the same dimensions would require about 0.5 TeraBytes of memory and about 770 TeraFLOPS of computing speed. Clearly, if the loss in accuracy resulting from the use of hierarchical methods is acceptable, our code yields significant savings in time and memory. We also study the convergence of a GMRES solver built around this mat-vec. We accelerate the convergence of the solver using three preconditioning techniques: diagonal scaling, block-diagonal preconditioning, and inner-outer preconditioning. We study the performance and parallel efficiency of these preconditioned solvers. Using this solver, we solve dense linear systems with hundreds of thousands of unknowns. Solving a 105K unknown problem takes about 10 minutes on a 64 processor T3D. Until very recently, boundary element problems of this magnitude could not even be generated, let alone solved.
Reversible Vector Ratchet Effect in Skyrmion Systems
NASA Astrophysics Data System (ADS)
Ma, Xiaoyu; Reichhardt, Charles; Reichhardt, Cynthia
Magnetic skyrmions are topologically non-trivial spin textures found in several magnetic materials. Since their motion can be controlled using ultralow current densities, skyrmions are appealing for potential applications in spintronics as information carriers and processing devices. In this work, we study the collective transport properties of driven skyrmions based on a particle-like model with molecular dynamics (MD) simulation. Our results show that ac driven skyrmions interacting with an asymmetric substrate provide a realization of a new class of ratchet system, which we call a vector ratchet, that arises due to the effect of the Magnus term on the skyrmion dynamics. In a vector ratchet, the dc motion induced by the ac drive can be described as a vector that can be rotated up to 360 degrees relative to the substrate asymmetry direction. This could represent a new method for controlling skyrmion motion for spintronic applications.
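A particle-level cartoon of the effect can be integrated directly. In the overdamped limit the equation of motion is α_d v + α_m ẑ × v = F, solved for v at each step; the substrate profile, ac drive, and coefficients below are illustrative assumptions, not the parameters of this study.

```python
import numpy as np

def net_drift(alpha_m, alpha_d=1.0, periods=50, steps=2000):
    """Displacement per ac-drive period for one overdamped skyrmion."""
    D = alpha_d**2 + alpha_m**2
    dt = 1.0 / steps
    x = np.zeros(2)
    for n in range(periods * steps):
        t = n * dt
        # asymmetric (ratchet) substrate force along x, period 1, plus ac drive
        fx = -(np.sin(2 * np.pi * x[0]) + 0.25 * np.sin(4 * np.pi * x[0])) \
             + 1.5 * np.sin(2 * np.pi * t)
        # invert alpha_d*v + alpha_m*(z x v) = F for the velocity v
        v = np.array([alpha_d * fx, -alpha_m * fx]) / D
        x += v * dt
    return x / periods

for am in (0.0, 1.0, 3.0):
    print(am, net_drift(am))  # the dc response rotates away from the drive axis
```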
Studies of Solar Helicity Using Vector Magnetograms
NASA Technical Reports Server (NTRS)
Hagyard, Mona J.; Pevstov, Alexei A.
1999-01-01
Observations of photospheric magnetic fields made with vector magnetographs have recently been used to study solar helicity. In this paper we indicate what can and cannot be derived from vector magnetograms, and point out some potential problems in these data that could affect calculations of 'helicity'. Among these problems are magnetic saturation, Faraday rotation, low spectral resolution, and the method of resolving the ambiguity in the azimuth.
Three-dimension reconstruction based on spatial light modulator
NASA Astrophysics Data System (ADS)
Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu
2011-02-01
Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology we can obtain a three-dimensional digital point cloud from a two-dimensional image and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and with phase-shifting digital holography. For surface fitting, many methods are available, such as iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light-source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, yielding the expected 3D point cloud. After de-noising and repair, feature points are selected and fitted with Zernike polynomials to obtain a fitting function of the surface topography, so as to reconstruct the object's three-dimensional topography. In this paper, a new three-dimensional reconstruction algorithm is proposed, with which the topography can be estimated from its grayscale at different sample points. Simulations and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.
Deformation structure analysis of material at fatigue on the basis of the vector field
NASA Astrophysics Data System (ADS)
Kibitkin, Vladimir V.; Solodushkin, Andrey I.; Pleshanov, Vasily S.
2017-12-01
In this paper, spatial distributions of deformation, circulation, and shear amplitudes and shear angles are obtained from a displacement vector field measured by the DIC technique. This vector field and its characteristic shears and vortices are given as an example of the approach, and the basic formulae are provided. The experiment shows that honeycomb deformation structures can arise in the center of a macrovortex at developed plastic flow. A spatial distribution of local circulation and shears is discovered that coincides with the deformation structure, although their amplitudes differ. The analysis proves that the spatial distribution of shear angles is a result of maximum tangential and normal stresses. The anticlockwise circulation of most local vortices obeys the normal Gaussian law in the area of interest.
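The quantities named here can be extracted from a measured displacement field with plain finite differences. A minimal numpy sketch (grid spacing and the row-is-y axis convention are assumptions):

```python
import numpy as np

def field_invariants(ux, uy, dx=1.0):
    """Rotation, maximum shear, and dilatation of a 2-D displacement field.

    ux, uy : displacement components on a regular grid (rows along y).
    """
    duxdy, duxdx = np.gradient(ux, dx)   # gradients along axis 0 (y), axis 1 (x)
    duydy, duydx = np.gradient(uy, dx)
    rotation = duydx - duxdy                         # local circulation density
    shear = np.hypot(duxdx - duydy, duxdy + duydx)   # maximum-shear magnitude
    dilatation = duxdx + duydy                       # local area change
    return rotation, shear, dilatation
```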
Effective Numerical Methods for Solving Elliptical Problems in Strengthened Sobolev Spaces
NASA Technical Reports Server (NTRS)
D'yakonov, Eugene G.
1996-01-01
Fourth-order elliptic boundary value problems in the plane can be reduced to operator equations in Hilbert spaces G that are certain subspaces of the Sobolev space W_2^2(Omega) ≡ G^(2). The appearance of asymptotically optimal algorithms for Stokes-type problems made it natural to focus on an approach that considers rot w ≡ (D_2 w, -D_1 w) ≡ u as a new unknown vector function, which automatically satisfies the condition div u = 0. In this work, we show that this approach can also be developed for an important class of problems from the theory of plates and shells with stiffeners. The main mathematical problem was to show that the well-known inf-sup condition (normal solvability of the divergence operator) holds for special Hilbert spaces. This result is also essential for certain hydrodynamics problems.
(2,2) and (0,4) supersymmetric boundary conditions in 3d N = 4 theories and type IIB branes
NASA Astrophysics Data System (ADS)
Chung, Hee-Joong; Okazaki, Tadashi
2017-10-01
The half-BPS boundary conditions preserving N = (2,2) and N = (0,4) supersymmetry in 3d N = 4 supersymmetric gauge theories are examined. The BPS equations admit decomposition of the bulk supermultiplets into specific boundary supermultiplets of preserved supersymmetry. Nahm-like equations arise in the vector multiplet BPS boundary condition preserving N = (0,4) supersymmetry, and Robin-type boundary conditions appear for the hypermultiplet coupled to the vector multiplet when N = (2,2) supersymmetry is preserved. The half-BPS boundary conditions are realized in brane configurations of type IIB string theory.
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then further incorporated into the preference vectors using the boundary intersection approach, and the MCDM problem with interval-valued fuzzy preferences is reformulated as a series of single-objective optimisation sub-problems, each corresponding to a decomposed preference vector. Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
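The decomposition step is compact to illustrate. MOEA/D-style methods turn each weight vector into one scalar subproblem; the sketch below uses the Tchebycheff scalarization for brevity (the paper uses a boundary-intersection form), with an interval preference on the first objective spread into evenly distributed weight vectors. All numbers are illustrative assumptions.

```python
import numpy as np

def tchebycheff(f, w, zstar):
    """Tchebycheff scalarization: one single-objective subproblem per weight."""
    return np.max(w * np.abs(f - zstar), axis=-1)

# Interval-valued preference on objective 1, say w1 in [0.2, 0.6], decomposed
# into evenly distributed weight vectors (two objectives, weights sum to 1)
w1 = np.linspace(0.2, 0.6, 9)
weights = np.stack([w1, 1.0 - w1], axis=1)

zstar = np.zeros(2)                      # ideal point (assumed known)
f = np.array([0.3, 0.8])                 # objective values of one candidate
print(tchebycheff(f, weights, zstar))    # candidate's fitness on each subproblem
```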
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
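For orientation, the evaluation framework described here pairs a learned feature extractor with a fixed Random Forest. A scikit-learn sketch with random stand-in data (array shapes and hyper-parameters are assumptions, not the paper's imagery or settings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 75))    # stand-in for raw patch/band values per pixel
y = rng.integers(0, 4, size=1000)  # stand-in for land-cover class labels

# Unsupervised feature learning (PCA) feeding the fixed classifier
clf = make_pipeline(PCA(n_components=16),
                    RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(clf, X, y, cv=3).mean())
```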
Lobo, N F; Hua-Van, A; Li, X; Nolen, B M; Fraser, M J
2002-04-01
Mosquito-vectored diseases such as yellow fever and dengue fever continue to have a substantial impact on human populations world-wide. Novel strategies for control of these mosquito-vectored diseases can arise through the development of reliable systems for genetic manipulation of the insect vector. A piggyBac vector marked with the Drosophila melanogaster cinnabar (cn) gene was used to transform the white-eyed khw strain of Aedes aegypti. Microinjection of preblastoderm embryos resulted in four families of cinnabar-transformed insects. An overall transformation frequency of 4%, with a range of 0% to as high as 13% in individual experiments, was achieved using a helper plasmid providing heat-shock-induced transposase. Southern hybridizations indicated multiple insertion events in three of four transgenic lines, while the presence of duplicated target TTAA sites at either end of individual insertions confirmed characteristic piggyBac transposition events in these three transgenic lines. The transgenic phenotype has remained stable for more than twenty generations. The transformations effected using the piggyBac element establish the potential of this element as a germ-line transformation vector for Aedine mosquitoes.
EPR-dosimetry of ionizing radiation
NASA Astrophysics Data System (ADS)
Popova, Mariia; Vakhnin, Dmitrii; Tyshchenko, Igor
2017-09-01
This article discusses the problems that arise during the radiation sterilization of medical products and proposes a solution based on alanine EPR dosimetry. The spectrometer parameters and the methods for calculating the absorbed dose are given. In addition, the problems that arise during irradiation with heavy particles are investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.
1995-10-01
Aztec is an iterative library that greatly simplifies the parallelization process when solving the linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools are provided that allow for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB) to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners such as polynomial or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
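Aztec itself is a parallel C library, but the Krylov-plus-preconditioner pattern it implements can be mimicked on one node with scipy. The sketch below runs ILU-preconditioned GMRES on a sparse 2-D Poisson system; the matrix, drop tolerance, and sizes are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse PDE-like system: 2-D Poisson matrix as a stand-in for a user-supplied A
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU preconditioner wrapped as a linear operator, then GMRES
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres returned info={info}")
```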
Johnson, Margaret E.; Hummer, Gerhard
2012-01-01
We explore the theoretical foundation of different string methods used to find dominant reaction pathways in high-dimensional configuration spaces. Pathways are assessed by the amount of reactive flux they carry and by their orientation relative to the committor function. By examining the effects of transforming between different collective coordinates that span the same underlying space, we unmask artificial coordinate dependences in strings optimized to follow the free energy gradient. In contrast, strings optimized to follow the drift vector produce reaction pathways that are significantly less sensitive to reparameterizations of the collective coordinates. The differences in these paths arise because the drift vector depends on both the free energy gradient and the diffusion tensor of the coarse collective variables. Anisotropy and position dependence of diffusion tensors arise commonly in spaces of coarse variables, whose generally slow dynamics are obtained by nonlinear projections of the strongly coupled atomic motions. We show here that transition paths constructed to account for dynamics by following the drift vector will (to a close approximation) carry the maximum reactive flux both in systems with isotropic position dependent diffusion, and in systems with constant but anisotropic diffusion. We derive a simple method for calculating the committor function along paths that follow the reactive flux. Lastly, we provide guidance for the practical implementation of the dynamic string method.
Incommensurate Phonon Anomaly and the Nature of Charge Density Waves in Cuprates
Miao, H.; Ishikawa, D.; Heid, R.; ...
2018-01-18
While charge density wave (CDW) instabilities are ubiquitous to superconducting cuprates, the different ordering wave vectors in various cuprate families have hampered a unified description of the CDW formation mechanism. Here, we investigate the temperature dependence of the low-energy phonons in the canonical CDW-ordered cuprate La1.875Ba0.125CuO4. We discover that the phonon softening wave vector associated with CDW correlations becomes temperature dependent in the high-temperature precursor phase and changes from a wave vector of 0.238 reciprocal lattice units (r.l.u.) below the ordering transition temperature to 0.3 r.l.u. at 300 K. This high-temperature behavior also shows that “214”-type cuprates can host CDW correlations at a similar wave vector to previously reported CDW correlations in non-214-type cuprates such as YBa2Cu3O6+δ. This indicates that cuprate CDWs may arise from the same underlying instability despite their apparently different low-temperature ordering wave vectors.
Blagrove, Marcus S C; Caminade, Cyril; Waldmann, Elisabeth; Sutton, Elizabeth R; Wardeh, Maya; Baylis, Matthew
2017-06-01
Mosquito-borne viruses have been estimated to cause over 100 million cases of human disease annually. Many methodologies have been developed to help identify areas most at risk from transmission of these viruses. However, generally, these methodologies focus predominantly on the effects of climate on either the vectors or the pathogens they spread, and do not consider the dynamic interaction between the optimal conditions for both vector and virus. Here, we use a new approach that considers the complex interplay between the optimal temperature for virus transmission, and the optimal climate for the mosquito vectors. Using published geolocated data we identified temperature and rainfall ranges in which a number of mosquito vectors have been observed to co-occur with West Nile virus, dengue virus or chikungunya virus. We then investigated whether the optimal climate for co-occurrence of vector and virus varies between "warmer" and "cooler" adapted vectors for the same virus. We found that different mosquito vectors co-occur with the same virus at different temperatures, despite significant overlap in vector temperature ranges. Specifically, we found that co-occurrence correlates with the optimal climatic conditions for the respective vector; cooler-adapted mosquitoes tend to co-occur with the same virus in cooler conditions than their warmer-adapted counterparts. We conclude that mosquitoes appear to be most able to transmit virus in the mosquitoes' optimal climate range, and hypothesise that this may be due to proportionally over-extended vector longevity, and other increased fitness attributes, within this optimal range. These results suggest that the threat posed by vector-competent mosquito species indigenous to temperate regions may have been underestimated, whilst the threat arising from invasive tropical vectors moving to cooler temperate regions may be overestimated.
Vector and Raster Data Storage Based on Morton Code
NASA Astrophysics Data System (ADS)
Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.
2018-05-01
Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats remains a tricky problem in geographic information system environments, and there is still no proper way to solve it. This article proposes a method for jointly interpreting vector and raster data. In this paper, we saved the image data and building vector data of Guilin University of Technology into an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes. This method stores vector and raster data in an Oracle database and uses Morton codes instead of row/column numbers and X, Y coordinates to mark the position information of vector and raster data. Using Morton codes to mark geographic information lets data storage make full use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector data and raster data at the same time.
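The core trick is bit interleaving: a Morton (Z-order) code merges a row/column pair, or quantized X, Y coordinates, into a single integer key that preserves spatial locality. A minimal pure-Python sketch (the 16-bit width is an assumption):

```python
def morton_encode(row, col, bits=16):
    """Interleave the bits of (row, col) into a single Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((row >> i) & 1) << (2 * i + 1)
        code |= ((col >> i) & 1) << (2 * i)
    return code

def morton_decode(code, bits=16):
    """Recover (row, col) by de-interleaving the bits of the code."""
    row = col = 0
    for i in range(bits):
        row |= ((code >> (2 * i + 1)) & 1) << i
        col |= ((code >> (2 * i)) & 1) << i
    return row, col

assert morton_decode(morton_encode(123, 456)) == (123, 456)
```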
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
Using distances between Top-n-gram and residue pairs for protein remote homology detection.
Liu, Bin; Xu, Jinghao; Zou, Quan; Xu, Ruifeng; Wang, Xiaolong; Chen, Qingcai
2014-01-01
Protein remote homology detection is one of the central problems in bioinformatics, important for both basic research and practical application. Currently, discriminative methods based on Support Vector Machines (SVMs) achieve the state-of-the-art performance. Exploring feature vectors incorporating the position information of amino acids or other protein building blocks is a key step to improve the performance of the SVM-based methods. Two new methods for protein remote homology detection are proposed, called SVM-DR and SVM-DT. SVM-DR is a sequence-based method, in which the feature vector representation for a protein is based on the distances between residue pairs. SVM-DT is a profile-based method, which considers the distances between Top-n-gram pairs. A Top-n-gram can be viewed as a profile-based building block of proteins, calculated from the frequency profiles. These two methods are position-dependent approaches incorporating the sequence-order information of protein sequences. Various experiments were conducted on a benchmark dataset containing 54 families and 23 superfamilies. Experimental results showed that these two new methods are very promising. Compared with the position-independent methods, the performance improvement is obvious. Furthermore, the proposed methods can also provide useful insights for studying the features of protein families. The better performance of the proposed methods demonstrates that position-dependent approaches are efficient for protein remote homology detection. Another advantage of our methods arises from the explicit feature space representation, which can be used to analyze the characteristic features of protein families. The source code of SVM-DT and SVM-DR is available at http://bioinformatics.hitsz.edu.cn/DistanceSVM/index.jsp.
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is a unit simplex in R^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
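The first method reduces to simulating random walks driven by P and reading the ranking off visit frequencies. A dense toy sketch (walk counts, lengths, and the 4-node matrix are illustrative; the real setting has n ~ 10^7-10^9 and exploits the row structure of P rather than storing it densely):

```python
import numpy as np

def pagerank_mc(P, walks=5000, steps=30, seed=0):
    """Estimate the stationary distribution of a stochastic matrix P by MCMC."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    states = rng.integers(0, n, size=walks)
    for _ in range(steps):
        # advance every walker one step by sampling from its current row of P
        states = np.array([rng.choice(n, p=P[s]) for s in states])
    return np.bincount(states, minlength=n) / walks

P = np.array([[0.1, 0.9, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.9, 0.1]])
print(pagerank_mc(P))
```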
Hidden symmetry in the confined hydrogen atom problem
NASA Astrophysics Data System (ADS)
Pupyshev, Vladimir I.; Scherbinin, Andrei V.
2002-07-01
The classical counterpart of the well-known quantum mechanical model of a spherically confined hydrogen atom is examined in terms of the Lenz vector, a dynamic variable featuring the conventional Kepler problem. It is shown that a conditional conservation law associated with the Lenz vector is true, in fair agreement with the corresponding quantum problem previously found to exhibit a hidden symmetry as well.
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
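A one-dimensional k-vector can be written in a few lines, for orientation: sort the data once, precompute a table k that maps a steering line to indices, and answer range queries with arithmetic plus a final trim rather than a search. Parameter choices below are assumptions.

```python
import numpy as np

class KVector:
    """k-vector range search: O(1) lookups into a static sorted data set."""
    def __init__(self, data):
        self.y = np.sort(data)
        n = len(self.y)
        eps = 1e-9 * (self.y[-1] - self.y[0])
        # steering line z(j) = m*j + q passing just below min and above max
        self.m = (self.y[-1] - self.y[0] + 2 * eps) / (n - 1)
        self.q = self.y[0] - eps - self.m
        # k[j] = number of elements <= z(j), precomputed once
        z = self.m * np.arange(1, n + 1) + self.q
        self.k = np.searchsorted(self.y, z, side='right')

    def range(self, a, b):
        """Elements of the data falling in [a, b] (superset lookup, then trim)."""
        n = len(self.y)
        ja = int(np.clip(np.floor((a - self.q) / self.m), 1, n))
        jb = int(np.clip(np.ceil((b - self.q) / self.m), 1, n))
        cand = self.y[self.k[ja - 1]:self.k[jb - 1]]
        return cand[(cand >= a) & (cand <= b)]

kv = KVector(np.random.default_rng(1).random(10000))
print(kv.range(0.25, 0.2503))
```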
Test of understanding of vectors: A reliable multiple-choice vector concept test
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2014-06-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.
NASA Astrophysics Data System (ADS)
Austen, M. C.; Crowe, T. P.; Elliott, M.; Paterson, D. M.; Peck, M. A.; Piraino, S.
2018-02-01
Human use of the European marine environment is increasing and diversifying. This is creating new mechanisms for human-induced change in marine life, which need to be understood and quantified, as do the impacts of these changes on ecosystems, their structures (e.g. biodiversity) and functioning (e.g. productivity), and the social and economic consequences that arise. The current and emerging pressures are multiple and interacting, arising, for example, from transport, platforms for renewable and non-renewable energy, exploitation of living and non-living resources, and agricultural and industrial discharges, together with wider environmental changes (including climate change). Anticipating the future consequences of these pressures and vectors of change for marine life, and of adaptation and mitigation measures (such as the introduction of new technologies and structures, new ballast water practices, ocean and offshore wind energy devices and new fishing strategies), is a prerequisite to the development and implementation of strategies, policies and regulations to manage the marine environment, such as the IMO Convention on ballast water management and the EU Maritime Policy and Marine Strategy Framework Directive.
TWSVR: Regression via Twin Support Vector Machine.
Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh
2016-02-01
Taking motivation from the Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR), where the regressor is obtained by solving a pair of quadratic programming problems (QPPs). In this paper we argue that the TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative approach to find a formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets.
Vectorized program architectures for supercomputer-aided circuit design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzoli, V.; Ferlito, M.; Neri, A.
1986-01-01
Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.
2012-02-01
We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.
Orbits of Two-Body Problem From the Lenz Vector
ERIC Educational Resources Information Center
Caplan, S.; And Others
1978-01-01
Obtains the orbits with reference to the center of mass of two bodies under a mutual inverse-square-law interaction by use of the eccentricity vector, which is equivalent to the Lenz vector within a numerical factor. (Author/SL)
NASA Astrophysics Data System (ADS)
Pearle, Philip
1982-03-01
In the problem of the gambler's ruin, a classic problem in probability theory, a number of gamblers play against each other until all but one of them is “wiped out.” It is shown that this problem is identical to a previously presented formulation of the reduction of the state vector, so that the state vectors in a linear superposition may be regarded as “playing” against each other until all but one of them is “wiped out.” This is a useful part of the description of an objectively real universe represented by a state vector that is a superposition of macroscopically distinguishable states dynamically created by the Hamiltonian and destroyed by the reduction mechanism.
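The correspondence is easy to check numerically. In a pairwise fair game each gambler's chance of ending up with everything equals their initial stake over the total (each fortune is a martingale), which is exactly the Born-rule weighting the reduction model needs. A small simulation under assumed stakes:

```python
import numpy as np

def gamblers_ruin(stakes, rng):
    """Play fair unit-stake rounds until one gambler holds everything."""
    s = np.array(stakes, dtype=int)
    while np.count_nonzero(s) > 1:
        alive = np.flatnonzero(s)
        i, j = rng.choice(alive, size=2, replace=False)  # random fair pairing
        s[i] += 1; s[j] -= 1          # one unit changes hands per round
    return int(np.flatnonzero(s)[0])

rng = np.random.default_rng(0)
stakes = [1, 2, 7]                     # weights of the competing "players"
wins = np.bincount([gamblers_ruin(stakes, rng) for _ in range(2000)], minlength=3)
print(wins / wins.sum())               # approx. stakes / total
```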
Lefschetz thimbles in fermionic effective models with repulsive vector-field
NASA Astrophysics Data System (ADS)
Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira
2018-06-01
We discuss two problems in complexified auxiliary fields in fermionic effective models, the auxiliary sign problem associated with the repulsive vector-field and the choice of the cut for the scalar field appearing from the logarithmic function. In the fermionic effective models with attractive scalar and repulsive vector-type interaction, the auxiliary scalar and vector fields appear in the path integral after the bosonization of fermion bilinears. When we make the path integral well-defined by the Wick rotation of the vector field, the oscillating Boltzmann weight appears in the partition function. This "auxiliary" sign problem can be solved by using the Lefschetz-thimble path-integral method, where the integration path is constructed in the complex plane. Another serious obstacle in the numerical construction of Lefschetz thimbles is caused by singular points and cuts induced by multivalued functions of the complexified scalar field in the momentum integration. We propose a new prescription which fixes gradient flow trajectories on the same Riemann sheet in the flow evolution by performing the momentum integration in the complex domain.
Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test
ERIC Educational Resources Information Center
Barniol, Pablo; Zavala, Genaro
2014-01-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…
ERIC Educational Resources Information Center
Kwon, Oh Hoon
2012-01-01
This dissertation documents a new way of conceptualizing vectors in college mathematics, especially in geometry. First, I will introduce three problems to show the complexity and subtlety of the construct of vectors with the classical vector representations. These highlight the need for a new framework that: (1) differentiates abstraction from a…
Complex equiangular tight frames
NASA Astrophysics Data System (ADS)
Tropp, Joel A.
2005-08-01
A complex equiangular tight frame (ETF) is a tight frame consisting of N unit vectors in Cd whose absolute inner products are identical. One may view complex ETFs as a natural geometric generalization of an orthonormal basis. Numerical evidence suggests that these objects do not arise for most pairs (d, N). The goal of this paper is to develop conditions on (d, N) under which complex ETFs can exist. In particular, this work concentrates on the class of harmonic ETFs, in which the components of the frame vectors are roots of unity. In this case, it is possible to leverage field theory to obtain stringent restrictions on the possible values for (d, N).
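Harmonic ETFs are concrete enough to verify in a few lines: choosing the rows of a DFT matrix indexed by a difference set produces one. The sketch below builds the (d, N) = (3, 7) example from the difference set {1, 2, 4} in Z_7 and checks equiangularity and tightness numerically.

```python
import numpy as np

N, D = 7, (1, 2, 4)                      # (7, 3, 1) difference set in Z_7
w = np.exp(2j * np.pi / N)
# frame vectors: columns of the DFT rows indexed by D, normalized to unit length
F = np.array([[w ** (j * k) for k in range(N)] for j in D]) / np.sqrt(len(D))

G = F.conj().T @ F                       # Gram matrix of 7 unit vectors in C^3
off = np.abs(G[~np.eye(N, dtype=bool)])
print(off.min(), off.max())              # identical: the frame is equiangular
print(np.allclose(F @ F.conj().T, (N / len(D)) * np.eye(len(D))))  # tightness
```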
NASA Technical Reports Server (NTRS)
Moorthi, Shrinivas; Higgins, R. W.
1993-01-01
An efficient, direct, second-order solver for the discrete solution of a class of two-dimensional separable elliptic equations on the sphere (which generally arise in implicit and semi-implicit atmospheric models) is presented. The method involves a Fourier transformation in longitude and a direct solution of the resulting coupled second-order finite-difference equations in latitude. The solver is made efficient by vectorizing over longitudinal wave-number and by using a vectorized fast Fourier transform routine. It is evaluated using a prescribed solution method and compared with a multigrid solver and the standard direct solver from FISHPAK.
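A minimal Cartesian analogue of the solver (not the spherical geometry, and not the paper's code): an FFT in one direction decouples the elliptic problem into independent tridiagonal systems in the other direction, solved with a banded solver. Grid size, boundary conditions, and the right-hand side are assumptions.

```python
import numpy as np
from scipy.linalg import solve_banded

n, dx = 64, 1.0
f = np.random.default_rng(2).normal(size=(n, n))

fh = np.fft.fft(f, axis=1)                # transform in the periodic direction
lam = (2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)) / dx**2
uh = np.empty_like(fh)
ab = np.zeros((3, n), dtype=complex)      # banded storage: super, main, sub
for m in range(n):
    # second-order differences in the other direction, homogeneous Dirichlet ends
    ab[0, 1:] = 1.0 / dx**2
    ab[1, :] = -2.0 / dx**2 - lam[m]
    ab[2, :-1] = 1.0 / dx**2
    uh[:, m] = solve_banded((1, 1), ab, fh[:, m])
u = np.real(np.fft.ifft(uh, axis=1))      # solution of the discrete problem
```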
High-capacity 'gutless' adenoviral vectors.
Kochanek, S; Schiedner, G; Volpers, C
2001-10-01
Adenoviral vectors are promising gene transfer vehicles for different gene therapy applications. High-capacity adenoviral (HC-Ad) vectors address some of the problems that have been observed with replication-defective, E1-deleted first-generation adenoviral vectors: toxicity and immunogenicity due to viral gene expression and 7 to 8 kb capacity limit for the transport of therapeutic DNA. This review summarizes HC-Ad vector-related publications from the past 18 months that are mainly concerned with vector design/production and in vivo applications in different murine models.
Bullinaria, John A; Levy, Joseph P
2012-09-01
In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors, namely the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD), that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.
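The underlying pipeline is short to express: accumulate a word-by-context co-occurrence count matrix, convert it to positive PMI, optionally reduce it with an SVD, and compare words with cosine similarity. The sketch below runs on random stand-in counts; corpus, window size, and the 300-dimension cut are assumptions.

```python
import numpy as np

def ppmi(C):
    """Positive pointwise mutual information from co-occurrence counts."""
    total = C.sum()
    pw = C.sum(axis=1, keepdims=True) / total   # target-word marginals
    pc = C.sum(axis=0, keepdims=True) / total   # context-word marginals
    with np.errstate(divide='ignore'):
        pmi = np.log((C / total) / (pw * pc))
    return np.maximum(pmi, 0.0)

rng = np.random.default_rng(3)
C = rng.poisson(1.0, size=(500, 1000)).astype(float)  # stand-in counts
M = ppmi(C)

U, S, _ = np.linalg.svd(M, full_matrices=False)       # SVD reduction
V = U[:, :300] * S[:300]                              # semantic vectors (rows)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(V[0], V[1]))
```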
NASA Astrophysics Data System (ADS)
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition either by the direct constraint elimination or by the Lagrange multiplier elimination methods. The macroscopic tangent operators are computed in an efficient way from a multiple right hand sides linear system whose left hand side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of vectors at the right hand side is equal to the number of the macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed using this factorized matrix at a reduced computational time.
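The multiple-right-hand-side step is simple to sketch: factorize the converged microscopic stiffness matrix once, then back-substitute one right-hand side per macroscopic kinematic variable; the solutions assemble into the macroscopic tangent. The sparse stand-in matrix and the count of macroscopic variables are assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
K = sp.random(n, n, density=5e-3, random_state=0)
K = (K + K.T + 10.0 * sp.eye(n)).tocsc()   # stand-in for the stiffness matrix

lu = spla.splu(K)                          # factorize once at the converged state

n_macro = 9                                # e.g. deformation-gradient components
B = np.random.default_rng(1).normal(size=(n, n_macro))
X = np.column_stack([lu.solve(B[:, j]) for j in range(n_macro)])
# X's columns are the sensitivities entering the macroscopic tangent operator
```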
Vector Doppler: spatial sampling analysis and presentation techniques for real-time systems
NASA Astrophysics Data System (ADS)
Capineri, Lorenzo; Scabia, Marco; Masotti, Leonardo F.
2001-05-01
The aim of the vector Doppler (VD) technique is the quantitative reconstruction of a velocity field independently of the angle between the ultrasonic probe axis and the flow. Vector Doppler is particularly interesting for studying vascular pathologies related to complex blood flow conditions. Clinical applications require a real-time operating mode and the capability to perform Doppler measurements over a defined volume; the combination of these two characteristics produces a real-time vector velocity map. In previous works the authors investigated the theory of pulsed wave (PW) vector Doppler and developed an experimental system capable of producing off-line 3D vector velocity maps. Afterwards, to produce dynamic velocity vector maps, we realized a new 2D vector Doppler system based on a modified commercial echograph. The measurement and presentation of a vector velocity field require a correct spatial sampling that must satisfy the Shannon criterion. In this work we tackled this problem, establishing a relationship between the sampling steps and the scanning system characteristics. Another problem posed by the vector Doppler technique is the real-time presentation of the data in a form that is easy for the physician to interpret. With this in mind, we attempted a multimedia solution that uses both interpolated images and sound to represent the information in the measured vector velocity map. These presentation techniques were tested in real-time scanning on flow phantoms and in preliminary in vivo measurements on a human carotid artery.
750 GeV diphotons: implications for supersymmetric unification II
Hall, Lawrence J.; Harigaya, Keisuke; Nomura, Yasunori
2016-07-29
Perturbative supersymmetric gauge coupling unification is possible in six theories where complete SU(5) TeV-scale multiplets of vector matter account for the size of the reported 750 GeV diphoton resonance, interpreted as a singlet multiplet S = (s + ia)/√2. One of these has a full generation of vector matter and a unified gauge coupling αG ~ 1. The diphoton signal rate is enhanced by loops of vector squarks and sleptons, especially when the trilinear A couplings are large. If the S H_u H_d coupling is absent, both s and a can contribute to the resonance, which may then have a large apparent width if the mass splitting between s and a arises from loops of vector matter. The width depends sensitively on A parameters and phases of the vector squark and slepton masses. Vector quarks and/or squarks are expected to be in reach of the LHC. If the S H_u H_d coupling is present, a leads to a narrow diphoton resonance, while a second resonance with decays s → hh, W+W-, ZZ is likely to be discovered at future LHC runs. In some of the theories a non-standard origin or running of the soft parameters is required, for example involving conformal hidden sector interactions.
Vertebrate reservoirs and secondary epidemiological cycles of vector-borne diseases.
Kock, R A
2015-04-01
Vector-borne diseases of importance to human and domestic animal health are listed, and the increasing emergence of syndromes, new epidemiological cycles and distributions is highlighted. These diseases involve a multitude of vectors and hosts, frequently for the same pathogen, and involve natural enzootic cycles, wild reservoirs and secondary epidemiological cycles, sometimes affecting humans and domestic animals. On occasion the main reservoir is in the domestic environment. Drivers for secondary cycles are mainly related to human impacts and activities; therefore, for purposes of prevention and control, the focus needs to be on the socioecology of the diseases. Technical and therapeutic solutions exist, and for control there needs to be a clear understanding of the main vertebrate hosts or reservoirs and the main vectors. The targets of interventions are usually the vector and/or secondary epidemiological cycles and, in the case of humans and domestic animals, the spillover or incidental hosts are treated. More attention needs to be given to the importance of the political economy in relation to vector-borne diseases, as many key drivers arise from globalisation, climate change and changes in structural ecologies. Attention to reducing the risk of emergence of new infection cycles through better management of the human-animal-environment interface is urgently needed.
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposing two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant-time) computation of superpositions (and sufficient statistics) of vector sets that are composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This drastically improves the run time of methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
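To make the additivity concrete: the optimal (Kabsch) rotation depends on the data only through a handful of sums, and the sums of a merged set are the element-wise sums of its parts. A minimal numpy sketch of this idea (our own illustration, not the authors' C++ library):

```python
# Additive sufficient statistics for least-squares superposition: the optimal
# rotation of a merged pair of sets is computed from the constituents' sums
# alone, without revisiting the coordinates.
import numpy as np

def stats(X, Y):
    """Sufficient statistics for superposing paired vector sets X onto Y."""
    return dict(n=len(X), sx=X.sum(0), sy=Y.sum(0),
                sxx=(X * X).sum(), syy=(Y * Y).sum(), C=X.T @ Y)

def merge(a, b):                       # additivity: just add the sums
    return {k: a[k] + b[k] for k in a}

def optimal_rotation(s):
    """Kabsch rotation recovered from the statistics alone."""
    C = s["C"] - np.outer(s["sx"], s["sy"]) / s["n"]   # centered cross-covariance
    U, _, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(U @ Vt))                 # avoid improper rotations
    return (U @ np.diag([1.0, 1.0, d]) @ Vt).T

rng = np.random.default_rng(1)
X1, Y1 = rng.normal(size=(10, 3)), rng.normal(size=(10, 3))
X2, Y2 = rng.normal(size=(7, 3)), rng.normal(size=(7, 3))
merged = merge(stats(X1, Y1), stats(X2, Y2))
full = stats(np.vstack([X1, X2]), np.vstack([Y1, Y2]))
assert np.allclose(optimal_rotation(merged), optimal_rotation(full))
```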
Min Lu; Michael J. Wingfield; Nancy Gillette; Jiang-Hua Sun
2011-01-01
Novel genotypes often arise during biological invasions, but their role in invasion success has rarely been elucidated. Here we examined the population genetics and behavior of the fungus, Leptographium procerum, vectored by a highly invasive bark beetle, Dendroctonus valens, to determine whether genetic changes in the fungus...
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes in mesoscale limited area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
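The breeding cycle just described is compact enough to sketch. The toy version below is our illustration only: a stand-in map plays the model, the norm is plain Euclidean, and a simple QR factorization stands in for the ensemble-transform orthogonalization:

```python
# Self-breeding sketch: integrate perturbations forward over a short interval,
# orthogonalize so members do not collapse onto the leading Lyapunov vector,
# rescale, and repeat.
import numpy as np

def self_breed(step, x0, n_members=4, n_cycles=20, amplitude=1e-3, rng=None):
    rng = rng or np.random.default_rng(0)
    P = rng.standard_normal((x0.size, n_members)) * amplitude  # initial perturbations
    for _ in range(n_cycles):
        base = step(x0)
        grown = np.column_stack([step(x0 + P[:, m]) - base
                                 for m in range(n_members)])
        Q, _ = np.linalg.qr(grown)       # orthogonalize the grown perturbations
        P = Q * amplitude                # rescale back to the initial amplitude
    return P                             # estimated fast-growing error structures

# toy nonlinear map standing in for the mesoscale model
step = lambda x: np.roll(x, 1) * 1.05 + 0.1 * np.sin(x)
modes = self_breed(step, np.zeros(50))
```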
Hybrid NN/SVM Computational System for Optimizing Designs
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2009-01-01
A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) (see figure) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with the use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. In terms that are necessarily oversimplified to fit the scope of this article, an SVM can be characterized as an algorithm that (1) effects a nonlinear mapping of input vectors into a higher-dimensional feature space and (2) involves a dual formulation of governing equations and constraints. One advantageous feature of the SVM approach is that an objective function (which one seeks to minimize to obtain coefficients that define an SVM mathematical model) is convex, so that unlike in the cases of many NN models, any local minimum of an SVM model is also a global minimum.
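As a concrete, deliberately simplified illustration of pairing a learned surrogate with an optimizer, the sketch below fits scikit-learn's SVR to samples of a response function and then searches the surrogate for a minimum; the data, kernel choice and optimizer are our assumptions, not NASA's implementation:

```python
# Fit an SVM regression surrogate (convex training problem) to sampled
# responses, then minimize the surrogate with a derivative-free search.
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 2))                         # sampled design points
y = (X ** 2).sum(axis=1) + 0.01 * rng.standard_normal(60)    # noisy response values

surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)

res = minimize(lambda d: surrogate.predict(d.reshape(1, -1))[0],
               x0=np.array([1.5, -1.5]), method="Nelder-Mead")
print(res.x)   # should land near the true optimum at (0, 0)
```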
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Discontinuous finite element method for vector radiative transfer
NASA Astrophysics Data System (ADS)
Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping
2017-03-01
The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several vector radiative transfer problems are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strongly forward scattering medium and in a two-dimensional square medium. The DFEM results agree very well with benchmark solutions in published references, showing that the developed DFEM is accurate and effective for solving vector radiative transfer problems.
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
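The key change, selecting guides from a nondominated archive instead of taking the other swarm's best particle, reduces to a Pareto-dominance filter. A minimal two-objective sketch of that selection step only (our illustration, not the full algorithm):

```python
# Pareto-dominance filter and guide selection for a VEPSO-style update.
import numpy as np

def nondominated(F):
    """Boolean mask of rows of objective matrix F not dominated by any other row."""
    dominated = np.array([np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
                          for f in F])
    return ~dominated

rng = np.random.default_rng(0)
F = rng.random((8, 2))                        # objective values of current solutions
archive = F[nondominated(F)]                  # nondominated (Pareto-optimal) set
guide = archive[rng.integers(len(archive))]   # guide for a swarm's velocity update
```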
Abad-Franch, Fernando; Valença-Barbosa, Carolina; Sarquis, Otília; Lima, Marli M.
2014-01-01
Background Vector-borne diseases are major public health concerns worldwide. For many of them, vector control is still key to primary prevention, with control actions planned and evaluated using vector occurrence records. Yet vectors can be difficult to detect, and vector occurrence indices will be biased whenever spurious detection/non-detection records arise during surveys. Here, we investigate the process of Chagas disease vector detection, assessing the performance of the surveillance method used in most control programs – active triatomine-bug searches by trained health agents. Methodology/Principal Findings Control agents conducted triplicate vector searches in 414 man-made ecotopes of two rural localities. Ecotope-specific ‘detection histories’ (vectors or their traces detected or not in each individual search) were analyzed using ordinary methods that disregard detection failures and multiple detection-state site-occupancy models that accommodate false-negative and false-positive detections. Mean (±SE) vector-search sensitivity was ∼0.283±0.057. Vector-detection odds increased as bug colonies grew denser, and were lower in houses than in most peridomestic structures, particularly woodpiles. False-positive detections (non-vector fecal streaks misidentified as signs of vector presence) occurred with probability ∼0.011±0.008. The model-averaged estimate of infestation (44.5±6.4%) was ∼2.4–3.9 times higher than naïve indices computed assuming perfect detection after single vector searches (11.4–18.8%); about 106–137 infestation foci went undetected during such standard searches. Conclusions/Significance We illustrate a relatively straightforward approach to addressing vector detection uncertainty under realistic field survey conditions. Standard vector searches had low sensitivity except in certain singular circumstances. Our findings suggest that many infestation foci may go undetected during routine surveys, especially when vector density is low. Undetected foci can cause control failures and induce bias in entomological indices; this may confound disease risk assessment and mislead program managers into flawed decision making. By helping correct bias in naïve indices, the approach we illustrate has potential to critically strengthen vector-borne disease control-surveillance systems. PMID:25233352
A Worksheet to Enhance Students’ Conceptual Understanding in Vector Components
NASA Astrophysics Data System (ADS)
Wutchana, Umporn; Emarat, Narumon
2017-09-01
With and without physical context, we explored 59 undergraduate students' conceptual and procedural understanding of vector components using both open-ended problems and multiple-choice items designed based on research instruments used in physics education research. The results showed that a number of students produced errors and revealed alternative conceptions, especially when asked to draw the graphical form of vector components. This indicated that most of them had not developed a strong foundation of understanding of vector components and could not apply those concepts to problems with physical context. Based on the findings, we designed a worksheet to enhance the students' conceptual understanding of vector components. The worksheet is composed of three parts which help students construct their own understanding of the definition, graphical form, and magnitude of vector components. To validate the worksheet, focus group discussions with 3 and 10 graduate students (in-service science teachers) were conducted. The modified worksheet was then distributed to 41 grade 9 students in a science class. The students spent approximately 50 minutes completing the worksheet. They sketched and measured vectors and their components and compared them with the trigonometric ratios to consolidate the concepts of vector components. After completing the worksheet, their conceptual models were verified: 83% of them constructed the correct model of vector components.
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
Weak vector boson production with many jets at the LHC √s = 13 TeV
NASA Astrophysics Data System (ADS)
Anger, F. R.; Febres Cordero, F.; Höche, S.; Maître, D.
2018-05-01
Signatures with an electroweak vector boson and many jets play a crucial role at the Large Hadron Collider, both in the measurement of Standard-Model parameters and in searches for new physics. Precise predictions for these multiscale processes are therefore indispensable. We present next-to-leading order QCD predictions for W±/Z + jets at √s = 13 TeV, including up to five/four jets in the final state. All production channels are included, and leptonic decays of the vector bosons are considered at the amplitude level. We assess theoretical uncertainties arising from renormalization- and factorization-scale dependence by considering fixed-order dynamical scales based on the H_T variable as well as on the MiNLO procedure. We also explore uncertainties associated with different choices of parton-distribution functions. We provide event samples that can be explored through publicly available n-tuple sets, generated with BlackHat in combination with Sherpa.
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
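In the finite-dimensional, sample-based case this construction is ordinary PCA: the eigenvectors of the sample covariance form the orthonormal basis whose truncation minimizes the mean squared error, and the residual equals the sum of the discarded eigenvalues. A short numpy sketch (our illustration, on synthetic data):

```python
# MSE-optimal k-term representation via eigenvectors of the sample covariance
# (PCA / discrete Karhunen-Loeve expansion).
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))  # data vectors (rows)
mu = D.mean(axis=0)
w, V = np.linalg.eigh(np.cov(D, rowvar=False))   # eigenpairs, ascending order
V = V[:, ::-1]                                   # principal directions first

k = 3
coeffs = (D - mu) @ V[:, :k]                     # optimal k-term coefficients
D_hat = mu + coeffs @ V[:, :k].T                 # rank-k representation
mse = np.mean(np.sum((D - D_hat) ** 2, axis=1))
print(mse, w[::-1][k:].sum())   # residual MSE ~ sum of discarded eigenvalues
```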
Optimal control strategies using vaccination and fogging in a dengue fever transmission model
NASA Astrophysics Data System (ADS)
Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan
2017-08-01
This paper discusses a model and an optimal control problem for dengue fever transmission. The model classifies the population into human and vector (mosquito) classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two optimal control variables in the model: fogging and vaccination. The objective of the optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. Vaccination is considered as a control variable because it is one of the measures being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
Light weakly coupled axial forces: models, constraints, and projections
Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...
2017-05-01
Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.
Trial and error: how the unclonable human mitochondrial genome was cloned in yeast.
Bigger, Brian W; Liao, Ai-Yin; Sergijenko, Ana; Coutelle, Charles
2011-11-01
Development of a human mitochondrial gene delivery vector is a critical step in the ability to treat diseases arising from mutations in mitochondrial DNA. Although we have previously cloned the mouse mitochondrial genome in its entirety and developed it as a mitochondrial gene therapy vector, the human mitochondrial genome has been dubbed unclonable in E. coli, due to regions of instability in the D-loop and tRNA(Thr) gene. We tested multi- and single-copy vector systems for cloning human mitochondrial DNA in E. coli and Saccharomyces cerevisiae, including transformation-associated recombination. Human mitochondrial DNA is unclonable in E. coli and cannot be retained in multi- or single-copy vectors under any conditions. It was, however, possible to clone and stably maintain the entire human mitochondrial genome in yeast as long as a single-copy centromeric plasmid was used. D-loop and tRNA(Thr) were both stable and unmutated. This is the first report of cloning the entire human mitochondrial genome and the first step in developing a gene delivery vehicle for human mitochondrial gene therapy.
Moving frames and prolongation algebras
NASA Technical Reports Server (NTRS)
Estabrook, F. B.
1982-01-01
Differential ideals generated by sets of 2-forms which can be written with constant coefficients in a canonical basis of 1-forms are considered. By setting up a Cartan-Ehresmann connection, in a fiber bundle over a base space in which the 2-forms live, one finds an incomplete Lie algebra of vector fields in the fibers. Conversely, given this algebra (a prolongation algebra), one can derive the differential ideal. The two constructs are thus dual, and analysis of either derives properties of both. Such systems arise in the classical differential geometry of moving frames. Examples of this are discussed, together with examples arising more recently: the Korteweg-de Vries and Harrison-Ernst systems.
Non-universal Z′ from fluxed GUTs
NASA Astrophysics Data System (ADS)
Crispim Romao, Miguel; King, Stephen F.; Leontaris, George K.
2018-07-01
We make a first systematic study of non-universal TeV scale neutral gauge bosons Z′ arising naturally from a class of F-theory inspired models broken via SU(5) by flux. The phenomenological models we consider may originate from semi-local F-theory GUTs arising from a single E8 point of local enhancement, assuming the minimal Z2 monodromy in order to allow for a renormalisable top quark Yukawa coupling. We classify such non-universal anomaly-free U(1)′ models requiring a minimal low energy spectrum and also allowing for a vector-like family. We discuss to what extent such models can account for the anomalous B-decay ratios R_K and R_K*.
NASA Technical Reports Server (NTRS)
Ranganathan, Raj P.; Dao, Bui V.
1992-01-01
A variety of heat transfer problems arise in the design of the Superconducting Super Collider (SSC). One class of problems is to minimize heat leak from the ambient to the SSC rings, since the rings contain superconducting magnets maintained at a temperature of 4 K. Another arises from the need to dump the beam of protons (traveling around the SSC rings) onto absorbers during an abort of the collider. Yet another category of problems is the cooling of equipment to dissipate the heat generated during operation. An overview of these problems and sample heat transfer results are given in this paper.
Broken SU(3) x SU(3) x SU(3) x SU(3) Symmetry
DOE R&D Accomplishments Database
Freund, P. G. O.; Nambu, Y.
1964-10-01
We argue that the "Eight-fold Way" version of the SU(3) symmetry should be extended to a product of up to four separate and badly broken SU(3) groups, including the γ5-type SU(3) symmetry. A hierarchy of subgroups (or subalgebras) are considered within this framework, and two candidates are found to be interesting in view of experimental evidence. Main features of the theory are: 1) the baryons belong to a nonet; 2) there is an octet of axial vector gauge mesons in addition to one or two octets of vector mesons; 3) pseudoscalar and scalar mesons exist as "incomplete" multiplets arising from spontaneous breakdown of symmetry.
Molenaar, Dylan; de Boeck, Paul
2018-06-01
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
Structure of weakly 2-dependent siphons
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh; Chen, Jiun-Ting
2013-09-01
Deadlocks arising from insufficiently marked siphons in flexible manufacturing systems can be controlled by adding monitors to each siphon - too many for large systems. Li and Zhou add monitors only to elementary siphons, controlling the remaining (dependent) siphons by adjusting the control depth variables of the elementary ones; only a linear number of monitors is then required. The control of weakly dependent siphons (WDSs) is rather conservative since only positive terms were considered. The structure of strongly dependent siphons (SDSs) has been studied earlier; based on this structure, the optimal sequence for adding monitors was discovered, and better controllability was achieved, allowing faster and more permissive control. These results were extended to S3PGR2 (systems of simple sequential processes with general resource requirements). This paper explores the structures of WDSs, which, as found in this paper, involve elementary resource circuits interconnecting at more than (for SDSs, exactly) one resource place. This saves the time needed to compute compound siphons, their complementary sets and T-characteristic vectors. It also allows us (1) to improve the controllability of WDSs and control siphons and (2) to avoid the time needed to find independent vectors for elementary siphons. We propose a necessary and sufficient test for adjusting control depth variables in S3PR (systems of simple sequential processes with resources) that avoids the sufficient-only, time-consuming linear integer programming (LIP) test (an NP-complete problem) previously required in some cases.
Applications of conformal field theory to problems in 2D percolation
NASA Astrophysics Data System (ADS)
Simmons, Jacob Joseph Harris
This thesis explores critical two-dimensional percolation in bounded regions in the continuum limit. The main method which we employ is conformal field theory (CFT). Our specific results follow from the null-vector structure of the c = 0 CFT that applies to critical two-dimensional percolation. We also make use of the duality symmetry obeyed at the percolation point, and the fact that percolation may be understood as the q-state Potts model in the limit q → 1. Our first results describe the correlations between points in the bulk and boundary intervals or points, i.e. the probability that the various points or intervals are in the same percolation cluster. These quantities correspond to order-parameter profiles under the given conditions, or cluster connection probabilities. We consider two specific cases: an anchoring interval, and two anchoring points. We derive results for these and related geometries using the CFT null-vectors for the corresponding boundary condition changing (bcc) operators. In addition, we exhibit several exact relationships between these probabilities. These relations between the various bulk-boundary connection probabilities involve parameters of the CFT called operator product expansion (OPE) coefficients. We then compute several of these OPE coefficients, including those arising in our new probability relations. Beginning with the familiar CFT operator φ_{1,2}, which corresponds to a free-fixed spin boundary change in the q-state Potts model, we then develop physical interpretations of the bcc operators. We argue that, when properly normalized, higher-order bcc operators correspond to successive fusions of multiple φ_{1,2} operators. Finally, by identifying the derivative of φ_{1,2} with the operator φ_{1,4}, we derive several new quantities called first crossing densities. These new results are then combined and integrated to obtain the three previously known crossing quantities in a rectangle: the probability of a horizontal crossing cluster, the probability of a cluster crossing both horizontally and vertically, and the expected number of horizontal crossing clusters. These three results were known to be solutions to a certain fifth-order differential equation, but until now no physically meaningful explanation had appeared. This differential equation arises naturally in our derivation.
Vectorization on the star computer of several numerical methods for a fluid flow problem
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
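The payoff of vector streaming is easiest to see on exactly this kind of grid relaxation: the entire interior update becomes one contiguous array expression instead of a doubly nested scalar loop. A modern analogue in numpy (our illustration: a Jacobi-type sweep on a cavity-like grid):

```python
# Scalar loop versus vectorized (streamed) form of one relaxation sweep;
# both compute the same averaged interior update.
import numpy as np

def sweep_scalar(u):
    v = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            v[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return v

def sweep_vector(u):
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

u = np.zeros((128, 128))
u[0, :] = 1.0                    # crude analogue of the sliding-wall boundary
assert np.allclose(sweep_scalar(u), sweep_vector(u))
```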
Modelling polarization dependent absorption: The vectorial Lambert-Beer law
NASA Astrophysics Data System (ADS)
Franssens, G.
2014-07-01
The scalar Lambert-Beer law, describing the absorption of unpolarized light travelling through a linear non-scattering medium, is simple, well-known, and mathematically trivial. However, when we take the polarization of light into account and consider a medium with polarization-dependent absorption, we now need a Vectorial Lambert-Beer Law (VLBL) to quantify this interaction. Such a generalization of the scalar Lambert-Beer law appears not to be readily available. A careful study of this topic reveals that it is not a trivial problem. We will see that the VLBL is not and cannot be a straightforward vectorized version of its scalar counterpart. The aim of this work is to present the general form of the VLBL and to explain how it arises. A reasonable starting point to derive the VLBL is the Vectorial Radiative Transfer Equation (VRTE), which models the absorption and scattering of (partially) polarized light travelling through a linear medium. When we turn off scattering, the VRTE becomes an infinitesimal model for the VLBL holding in the medium. By integrating this equation, we expect to find the VLBL. Surprisingly, this is not the end of the story. It turns out that light propagation through a medium with polarization-dependent absorption is mathematically not that trivial. The trickiness behind the VLBL can be understood in the following terms. The matrix in the VLBL, relating any input Stokes vector to the corresponding output Stokes vector, must necessarily be a Mueller matrix. The subset of invertible Mueller matrices forms a Lie group. It is known that this Lie group contains the orthochronous Lorentz group as a subgroup. The group manifold of this subgroup has a (well-known) non-trivial topology. Consequently, the manifold of the Lie group of Mueller matrices also has (at least the same, but likely a more general) non-trivial topology (the full extent of which is not yet known). The type of non-trivial topology, possessed by the manifold of (invertible) Mueller matrices and which stems from the orthochronous Lorentz group, already implies (by a theorem from Lie group theory) that the infinitesimal VRTE model for the VLBL is not guaranteed to produce in general the correct finite model (i.e., the VLBL itself) upon integration. What happens is that the non-trivial topology acts as an obstruction that prevents the (matrix) exponential function from reaching the correct Mueller matrix (for the medium at hand), because it is too far away from the identity matrix. This means that, for certain media, the VLBL obtained by integrating the VRTE may be different from the VLBL that one would actually measure. Basically, we have here an example of a physical problem that cannot be completely described by a differential equation! The following more concrete example further illustrates the problem. Imagine a slab of matter, showing polarization-dependent absorption but negligible scattering, and consider its Mueller matrix for forward propagating plane waves. Will the measured Mueller matrix of such a slab always have positive determinant? There is no apparent mathematical or physical reason why this (or any) Mueller matrix must have positive determinant. On the other hand, our VRTE model with scattering turned off will always generate a Mueller matrix with positive determinant. This particular example also presents a nice challenge and opportunity for the experimenter: demonstrate the existence of a medium of the envisioned type having a Mueller matrix with non-positive determinant!
Lie group theory not only explains when and why we cannot trust a differential equation, but also offers a way out of such a situation if it arises. Applied to our problem, Lie group theory in addition yields the general form of the VLBL. More details will be given in the presentation.
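For orientation, the contrast drawn in the abstract can be written compactly. The matrix form below is the naive exponential generalization whose reach the topological argument limits (notation ours: S is the Stokes vector, K a constant absorption generator, z the path length):

```latex
% Scalar law for unpolarized light, and its naive vectorial generalization:
\[
  I(z) \;=\; I_0\, e^{-\kappa z}
  \qquad\longrightarrow\qquad
  \mathbf{S}(z) \;=\; \exp(-\mathbf{K} z)\,\mathbf{S}_0 .
\]
% Since the determinant of a matrix exponential is the exponential of the
% trace, integrating the scattering-free VRTE can only ever produce Mueller
% matrices with positive determinant, which is the obstruction discussed above:
\[
  \det \exp(-\mathbf{K} z) \;=\; e^{-z\,\operatorname{tr}\mathbf{K}} \;>\; 0 .
\]
```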
Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong
2012-03-01
Although pattern classification has been extensively studied in the past decades, how to effectively solve the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as the corresponding quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n²) (or even up to O(n³)) computational complexity, where n is the number of training patterns, which heavily limits the applicability of these methods to large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, and all the testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as the kernelized Minimum Enclosing Ball (MEB), which leads to a distinctive merit of MAMC: it has the flexibility of controlling the sum of support vectors like v-SVC and may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC such that fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
Constraints on the South Atlantic Anomaly from Réunion Island
NASA Astrophysics Data System (ADS)
Béguin, A.; de Groot, L. V.
2017-12-01
The South Atlantic Anomaly (SAA) is a region where the geomagnetic field intensity is about half as strong as would be expected from the current geomagnetic dipole moment derived from geomagnetic field models. These field models predict a westward movement of the SAA and place its origin east of Africa around 1500 AD. The onset and evolution of the SAA, however, are poorly constrained due to a lack of full-vector paleomagnetic data from Africa and the Indian Ocean for the past centuries. Here we present a full-vector paleosecular variation (PSV) curve for Réunion Island (21°S, 55°E), located east of the African continent in the region that currently shows the fastest increase in geomagnetic field strength, in contrast to the average global decay. We sampled 27 sites covering the last 700 years and subjected them to a directional and multi-method paleointensity study. The obtained directional records reveal shallower inclinations and less variation in declination than current geomagnetic field model predictions. Scrutinizing the IZZI-Thellier, multispecimen, and calibrated pseudo-Thellier results produces a coherent paleointensity record. The intensity trend predicted by the geomagnetic field models generally agrees with the trend in our data; however, the high paleointensities are higher than the models predict, and the low paleointensities are lower, illustrating the inevitable smoothing inherent to geomagnetic field modelling. We will discuss the constraints on the onset of the SAA arising from this new full-vector PSV curve for Réunion and the implications for the past and future evolution of this geomagnetic phenomenon.
Notes on S-folds and N = 3 theories
NASA Astrophysics Data System (ADS)
Agarwal, Prarit; Amariti, Antonio
2016-09-01
We consider D3 branes in the presence of an S-fold plane. The latter is a non-perturbative object, arising from the combined projection of an S-duality twist and a discrete orbifold of the R-symmetry group. This construction naively gives rise to 4d N = 3 SCFTs. Nevertheless, it has been observed that in some cases supersymmetry is enhanced to N = 4. In this paper we study the explicit counting of degrees of freedom arising from vector multiplets associated to strings suspended between the D3 branes probing the S-fold. We propose that, for trivial discrete torsion, there is no vector multiplet associated to (1, 0) strings stretched between a brane and its image. We then focus on the case of the rank 2 N = 3 theory that enhances to SU(3) N = 4 SYM, explicitly spelling out the isomorphism between the BPS spectrum of the manifestly N = 3 theory and that of three D3 branes in flat spacetime. Subsequently, we consider 3-pronged strings in these setups and show how wall-crossing in the S-fold background implies wall-crossing in the flat geometry. This can be considered a consistency check of the conjectured SUSY enhancement. We also find that the above isomorphism implies that a (1, 0) string, suspended between a brane and its image in the S-fold, corresponds to a 3-string junction in the flat geometry. This is in agreement with our claim on the absence of a vector multiplet associated to such (1, 0) strings, because the 3-string junction in flat geometry gives rise to a 1/4-BPS multiplet of the N = 4 algebra. Such multiplets always include particles with spin > 1, as opposed to a vector multiplet, which is restricted by the requirement that the spins be ≤ 1.
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The assumed error model is a Laplacian distribution with mean λ computed from a sample of the input image. Laplacian-distributed random numbers with mean λ are generated from a uniform random number generator and grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector: the DCT coefficients are multiplied by a weight matrix found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
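A rough sketch of that construction (our illustration only: the Laplacian is sampled by inverse-CDF, the paper's "mean" λ is used as the Laplacian scale, and the HVS weight matrix is a crude low-pass stand-in for the paper's perceptually optimal weights):

```python
# MVQ-style codebook generation: Laplacian random vectors, conditioned by
# weighting their DCT coefficients and transforming back.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
lam = 4.0                          # estimated from a sample of the input image
N, shape = 256, (8, 8)             # codebook entries, vector shape (assumed)

u = rng.random((N, *shape)) - 0.5                       # uniform samples in (-1/2, 1/2)
lap = -lam * np.sign(u) * np.log(1.0 - 2.0 * np.abs(u)) # inverse-CDF Laplacian samples

fx = np.arange(8)
W = np.outer(1.0 / (1.0 + fx), 1.0 / (1.0 + fx))        # illustrative HVS-like weights

codebook = np.stack([idctn(dctn(v, norm="ortho") * W, norm="ortho") for v in lap])
```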
Okamoto, Kenichi W; Gould, Fred; Lloyd, Alun L
2016-03-01
Many vector-borne diseases lack effective vaccines and medications, and the limitations of traditional vector control have inspired novel approaches based on using genetic engineering to manipulate vector populations and thereby reduce transmission. Yet both the short- and long-term epidemiological effects of these transgenic strategies are highly uncertain. If neither vaccines, medications, nor transgenic strategies can by themselves suffice for managing vector-borne diseases, integrating these approaches becomes key. Here we develop a framework to evaluate how clinical interventions (i.e., vaccination and medication) can be integrated with transgenic vector manipulation strategies to prevent disease invasion and reduce disease incidence. We show that the ability of clinical interventions to accelerate disease suppression can depend on the nature of the transgenic manipulation deployed (e.g., whether vector population reduction or replacement is attempted). We find that making a specific, individual strategy highly effective may not be necessary for attaining public-health objectives, provided suitable combinations can be adopted. However, we show how combining only partially effective antimicrobial drugs or vaccination with transgenic vector manipulations that merely temporarily lower vector competence can amplify disease resurgence following transient suppression. Thus, transgenic vector manipulation that cannot be sustained can have adverse consequences, which ineffective clinical interventions can at best only mitigate and at worst temporarily exacerbate. This result, which arises from differences between the time scale on which the interventions affect disease dynamics and the time scale of host population dynamics, highlights the importance of accounting for the potential delay in the effects of deploying public health strategies on long-term disease incidence. We find that for systems at the disease-endemic equilibrium, even modest perturbations induced by weak interventions can exhibit strong, albeit transient, epidemiological effects. This, together with our finding that under some conditions combining strategies could have transient adverse epidemiological effects, suggests that a relatively long time horizon may be necessary to discern the efficacy of alternative intervention strategies.
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparison baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
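The core of any CVP method, refined or not, is to reduce the infinite-dimensional control to node values on a time grid and hand those to a finite-dimensional optimizer. A bare-bones sketch with a fixed non-uniform grid and a toy tracking dynamic (entirely our illustration, not the paper's HHT-refined scheme):

```python
# Control vector parameterization: the control is piecewise constant on a
# (possibly non-uniform) time grid; the node values are optimized.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

grid = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # non-uniform grid, finer where needed
target = 1.0                                 # level to track (arbitrary units)

def tracking_cost(u_nodes):
    u = lambda t: u_nodes[min(np.searchsorted(grid, t, side="right") - 1,
                              len(u_nodes) - 1)]
    rhs = lambda t, x: [x[1], u(t) - 0.5 * x[1]]    # toy double integrator with drag
    sol = solve_ivp(rhs, (grid[0], grid[-1]), [0.0, 0.0], dense_output=True)
    t = np.linspace(grid[0], grid[-1], 200)
    return np.trapz((sol.sol(t)[0] - target) ** 2, t)   # integrated tracking error

res = minimize(tracking_cost, x0=np.zeros(len(grid) - 1), method="Nelder-Mead")
print(res.x)   # one optimal control value per grid interval
```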
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements, due to experimental limitations, means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
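In miniature, the MAP formulation is a data-fit term from the forward model plus a regularizing prior, minimized iteratively. A toy 1D version (entirely our illustration: random linear forward model, quadratic smoothness prior, plain gradient descent):

```python
# MAP estimation in miniature: minimize ||Ax - b||^2 + beta * ||Dx||^2,
# where A is the forward model and D a finite-difference prior operator.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 40
A = rng.standard_normal((m, n))                      # stand-in linear forward model
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.05 * rng.standard_normal(m)       # noisy, incomplete measurements

D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]     # first-difference operator
beta, step = 1.0, 1e-3                               # prior weight, descent step

x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b) + beta * (D.T @ (D @ x))   # gradient of the MAP cost
    x -= step * grad                                    # iterative minimization
```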
NASA Technical Reports Server (NTRS)
Gary, G. Allen; Hagyard, M. J.
1990-01-01
Off-center vector magnetograms which use all three components of the measured field provide the maximum information content from the photospheric field and can provide the most consistent potential field independent of the viewing angle by defining the normal component of the field. The required transformations of the magnetic field vector and the geometric mapping of the observed field in the image plane into the heliographic plane have been described. Here we discuss the total transformation of specific vector magnetograms to detail the problems and procedures that one should be aware of in analyzing observational magnetograms. The effect of the 180-deg ambiguity of the observed transverse field is considered as well as the effect of curvature of the photosphere. Specific results for active regions AR 2684 (September 23, 1980) and AR 4474 (April 26, 1984) from the Marshall Space Flight Center Vector magnetograph are described which point to the need for the heliographic projection in determining the field structure of an active region.
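One small, self-contained piece of such an analysis is the 180-deg ambiguity mentioned above. The standard acute-angle resolution, shown here as a generic illustration (not necessarily the authors' procedure), flips each transverse vector to agree with a reference, e.g. potential, field:

```python
# Acute-angle disambiguation of the transverse field: flip any vector that
# makes an obtuse angle with the reference transverse field.
import numpy as np

def disambiguate(Bt, Bt_ref):
    """Bt, Bt_ref: arrays of shape (..., 2), transverse and reference fields."""
    flip = np.sum(Bt * Bt_ref, axis=-1) < 0       # obtuse relative to the reference?
    return np.where(flip[..., None], -Bt, Bt)     # then take the opposite direction

Bt = np.array([[1.0, 0.2], [-0.9, 0.1]])
Bt_ref = np.array([[1.0, 0.0], [1.0, 0.0]])
print(disambiguate(Bt, Bt_ref))   # second vector is flipped to [0.9, -0.1]
```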
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets show that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...
2017-07-03
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) probability estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
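In its simplest form, the MAP formulation above reduces to minimizing a data-fidelity term plus a regularizing prior. The sketch below is a minimal illustration, not the authors' algorithm: it assumes hypothetical forward/adjoint projector callables A and At for the LTEM forward model, and it substitutes a plain quadratic smoothness prior for the paper's prior model.

    import numpy as np

    def laplacian(x):
        # Discrete Laplacian via periodic shifts; the gradient of the
        # smoothness prior 0.5*||grad x||^2 is -laplacian(x).
        out = -2.0 * x.ndim * x
        for ax in range(x.ndim):
            out += np.roll(x, 1, axis=ax) + np.roll(x, -1, axis=ax)
        return out

    def mbir_map(y, A, At, lam=0.1, step=1e-3, n_iter=200):
        # Minimize 0.5*||A(x) - y||^2 + lam*0.5*||grad x||^2 by gradient
        # descent; A/At are assumed forward and adjoint projectors.
        x = np.zeros_like(At(y))
        for _ in range(n_iter):
            x -= step * (At(A(x) - y) - lam * laplacian(x))
        return x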
Large General Purpose Frame for Studying Force Vectors
ERIC Educational Resources Information Center
Heid, Christy; Rampolla, Donald
2011-01-01
Many illustrations and problems on the vector nature of forces have weights and forces in a vertical plane. One of the common devices for studying the vector nature of forces is a horizontal "force table," in which forces are produced by weights hanging vertically and transmitted to cords in a horizontal plane. Because some students have…
ERIC Educational Resources Information Center
Yaacob, Yuzita; Wester, Michael; Steinberg, Stanly
2010-01-01
This paper presents a prototype of a computer learning assistant ILMEV (Interactive Learning-Mathematica Enhanced Vector calculus) package with the purpose of helping students to understand the theory and applications of integration in vector calculus. The main problem for students using Mathematica is to convert a textbook description of a…
PREFACE: XXIst International Symposium on the Jahn-Teller Effect 2012
NASA Astrophysics Data System (ADS)
Koizumi, Hiroyasu
2013-04-01
(The PDF contains the full conference program, the list of sponsors and the conference poster.) The 21st International Symposium on the Jahn-Teller effect was held at the University of Tsukuba, Japan, from 26-31 August 2012. People from 23 different countries participated and the number of registered participants was 118. In this symposium, the phrase 'Jahn-Teller effect' was taken to have a rather broad meaning. We discussed the Jahn-Teller and pseudo Jahn-Teller distortions. We also discussed general vibronic problems, and the problems associated with the conical intersections of potential energy surfaces. As is indicated in the subtitle of the present symposium, 'Physics and Chemistry of Symmetry Breaking', a number of different topics concerning symmetry breaking were also extensively discussed. In particular, we had many discussions on magnetism, ferroelectricity, and superconductivity. A subtle but important problem that was dealt with was the appearance of multi-valuedness in the use of multi-component wave functions. In Jahn-Teller problems, we almost always use multi-component wave functions; thus, knowledge of the proper handling of multi-valuedness is very important. Digital computers are not good at dealing with multi-valuedness, but we need to somehow handle it in our calculations. A very well-known example of successful handling is found in the problem of a molecular system with a conical intersection: we cannot obtain a solution that satisfies the single-valuedness of wave functions (SVWF) just by using the potential energy surface generated by a package program and solving the Schrödinger equation with the quantum Hamiltonian constructed from the classical counterpart by replacing the classical variables with the corresponding operators; however, if a gauge potential is included and the double-valuedness of the electronic wave functions around the conical intersections is taken into account, a solution that satisfies the SVWF is obtained. A related problem also arises when dealing with the so-called adiabatic-diabatic transformation (ADT) that removes coupling terms between different Born-Oppenheimer electronic states. It is known that an exact ADT does not exist in general; however, digital computers do this impossible task erroneously if we just plug in numbers. The results obtained may be good in practice; however, we need to be aware that such calculations may miss some important details. I asked Professor Mead to write a note on this matter since there is still confusion in the treatment of the ADT. The proper handling of the ADT may be a topic in the next Jahn-Teller symposium. Although more than a quarter of a century has passed since its discovery, the mechanism of cuprate superconductivity is still actively discussed. In the cuprate, the multi-valuedness problem arises when the conduction electrons create spin-vortices and the twisting of the spin basis occurs. Since a number of experiments and theories indicate the presence of spin-vortices in the cuprate, a proper handling of the multi-valuedness arising from the spin degree of freedom will be important. It has been argued that such multi-valuedness induces a vector potential that generates the persistent current. As the papers in these proceedings indicate, the Jahn-Teller effects are ubiquitous in physics and chemistry. The ideas and methodologies developed in this community have very wide applicability.
I believe that this community will continue to contribute to the advancement of science in a fundamental way. Hiroyasu Koizumi, Tsukuba, February 2013
E11, brane dynamics and duality symmetries
NASA Astrophysics Data System (ADS)
West, Peter
2018-05-01
Following arXiv:hep-th/0412336 we use the nonlinear realisation of the semi-direct product of E11 and its vector representation to construct brane dynamics. The brane moves through a space-time which arises in the nonlinear realisation from the vector representation and it contains the usual embedding coordinates as well as the worldvolume fields. The resulting equations of motion are first order in derivatives and can be thought of as duality relations. Each brane carries the full E11 symmetry and so the Cremmer-Julia duality symmetries. We apply this theory to find the dynamics of the IIA and IIB strings, the M2 and M5 branes, the IIB D3 brane as well as the one and two branes in seven dimensions.
Deformations of vector-scalar models
NASA Astrophysics Data System (ADS)
Barnich, Glenn; Boulanger, Nicolas; Henneaux, Marc; Julia, Bernard; Lekeu, Victor; Ranjbar, Arash
2018-02-01
Abelian vector fields non-minimally coupled to uncharged scalar fields arise in many contexts. We investigate here through algebraic methods their consistent deformations ("gaugings"), i.e., the deformations that preserve the number (but not necessarily the form or the algebra) of the gauge symmetries. Infinitesimal consistent deformations are given by the BRST cohomology classes at ghost number zero. We parametrize explicitly these classes in terms of various types of global symmetries and corresponding Noether currents through the characteristic cohomology related to antifields and equations of motion. The analysis applies to all ghost numbers and not just ghost number zero. We also provide a systematic discussion of the linear and quadratic constraints on these parameters that follow from higher-order consistency. Our work is relevant to the gaugings of extended supergravities.
Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.
2009-12-15
We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.
2018-05-01
The problem of inverting for multiple physical parameters in the subsurface using seismic full-waveform inversion (FWI) is complicated by interparameter trade-off arising from inherent ambiguities between different physical parameters. Parameter resolution is often characterized using scattering radiation patterns, but these neglect some important aspects of interparameter trade-off. More general analysis and mitigation of interparameter trade-off in isotropic-elastic FWI is possible through judiciously chosen multiparameter Hessian matrix-vector products. We show that products of multiparameter Hessian off-diagonal blocks with model perturbation vectors, referred to as interparameter contamination kernels, are central to the approach. We apply the multiparameter Hessian to various vectors designed to provide information regarding the strengths and characteristics of interparameter contamination, both locally and within the whole volume. With numerical experiments, we observe that S-wave velocity perturbations introduce strong contaminations into density and phase-reversed contaminations into P-wave velocity, but themselves experience only limited contaminations from other parameters. Based on these findings, we introduce a novel strategy to mitigate the influence of interparameter trade-off with approximate contamination kernels. Furthermore, we recommend that the local spatial and interparameter trade-off of the inverted models be quantified using extended multiparameter point spread functions (EMPSFs) obtained with a preconditioned conjugate-gradient algorithm. Compared to traditional point spread functions, the EMPSFs appear to provide more accurate measurements for resolution analysis, by de-blurring the estimations, scaling magnitudes and mitigating interparameter contamination. Approximate eigenvalue volumes constructed with a stochastic probing approach are proposed to evaluate the resolution of the inverted models within the whole model. With a synthetic Marmousi model example and a land seismic field data set from Hussar, Alberta, Canada, we confirm that the new inversion strategy suppresses the interparameter contamination effectively and provides more reliable density estimations in isotropic-elastic FWI compared to the standard simultaneous inversion approach.
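A matrix-free Hessian-vector product is the workhorse behind such probing. One generic way to realize it, sketched below, is a finite-difference stand-in, not the second-order adjoint machinery an FWI code would normally use; the gradient callable and model arrays are hypothetical.

    import numpy as np

    def hessian_vector_product(grad, m, v, eps=1e-6):
        # H v ~ (grad(m + eps*v) - grad(m)) / eps: probe the multiparameter
        # Hessian with a perturbation v confined to one parameter class and
        # read off the contamination appearing in the other parameter blocks.
        m = np.asarray(m, dtype=float)
        v = np.asarray(v, dtype=float)
        return (grad(m + eps * v) - grad(m)) / eps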
Vector Addition: Effect of the Context and Position of the Vectors
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2010-10-01
In this article we investigate the effects of (1) the context and (2) the position of the vectors on 2D vector addition tasks. We administered a test to 512 students completing introductory physics courses at a private Mexican university. In the first part, we analyze students' responses in three isomorphic problems: displacements, forces, and no physical context. Students were asked to draw two vectors and the vector sum. We analyzed students' procedures to identify difficulties in drawing the vector addition and showed that the context matters, not only compared to the context-free case but also between the contexts. In the second part, we analyze students' responses with three different arrangements of the sum of two vectors: tail-to-tail, head-to-tail and separated vectors. We compared the frequencies of the errors in the three different positions to deduce students' conceptions of the addition of vectors.
Van den Akker, Alithe L; Prinzie, Peter; Deković, Maja; De Haan, Amaranta D; Asscher, Jessica J; Widiger, Thomas
2013-12-01
This study investigated the development of personality extremity (deviation from the average midpoint of all 5 personality dimensions together) across childhood and adolescence, as well as relations between personality extremity and adjustment problems. For 598 children (mean age at Time 1 = 7.5 years), mothers and fathers reported the Big Five personality dimensions 4 times across 8 years. Children's vector length in a 5-dimensional configuration of the Big Five dimensions represented personality extremity. Mothers, fathers, and teachers reported children's internalizing and externalizing problems at the 1st and final measurement. In a cohort-sequential design, we modeled personality extremity in children and adolescents from ages 6 to 17 years. Growth mixture modeling revealed a similar solution for both mother and father reports: a large group with relatively short vectors that were stable over time (mother reports: 80.3%; father reports: 84.7%) and 2 smaller groups with relatively long vectors (i.e., extreme personality configuration). One group started out relatively extreme and decreased over time (mother reports: 13.2%; father reports: 10.4%), whereas the other group started out only slightly higher than the short vector group but increased across time (mother reports: 6.5%; father reports: 4.9%). Children who belonged to the increasingly extreme class experienced more internalizing and externalizing problems in late adolescence, controlling for previous levels of adjustment problems and the Big Five personality dimensions. Personality extremity may be important to consider when identifying children at risk for adjustment problems.
Ferguson, Christopher J; Ceranoglu, T Atilla
2014-03-01
Pathological gaming (PG) behaviors are behaviors which interfere with other life responsibilities. Continued debate exists regarding whether symptoms of PG behaviors are a unique phenomenon or arise from other mental health problems, including attention problems. Development of attention problems and occurrence of pathological gaming in 144 adolescents were followed during a 1-year prospective analysis. Teens and their parents reported on pathological gaming behaviors, attention problems, and current grade point average, as well as several social variables. Results were analyzed using regression and path analysis. Attention problems tended to precede pathological gaming behaviors, but the inverse was not true. Attention problems but not pathological gaming predicted lower GPA 1 year later. Current results suggest that pathological gaming arises from attention problems, but not the inverse. These results suggest that pathological gaming behaviors are symptomatic of underlying attention related mental health issues, rather than a unique phenomenon.
750 GeV diphotons: implications for supersymmetric unification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Lawrence J.; Harigaya, Keisuke; Nomura, Yasunori
2016-03-03
A recent signal of 750 GeV diphotons at the LHC can be explained within the framework of supersymmetric unification by the introduction of vector quarks and leptons with Yukawa couplings to a singlet S that describes the 750 GeV resonance. We study the most general set of theories that allow successful gauge coupling unification, and find that these Yukawa couplings are severely constrained by renormalization group behavior: they are independent of ultraviolet physics and flow to values at the TeV scale that we calculate precisely. As a consequence the vector quarks and leptons must be light; typically in the region of 375 GeV to 700 GeV, and in certain cases up to 1 TeV. The 750 GeV resonance may have a width less than the experimental resolution; alternatively, with the mass splitting between scalar and pseudoscalar components of S arising from one-loop diagrams involving vector fermions, we compute an apparent width of tens of GeV.
Associated production of a Higgs boson at NNLO
Campbell, John M.; Ellis, R. Keith; Williams, Ciaran
2016-06-30
Here we present a Next-to-Next-to-Leading Order (NNLO) calculation of the production of a Higgs boson in association with a massive vector boson. We include the decays of the unstable Higgs and vector bosons, resulting in a fully flexible parton-level Monte Carlo implementation. We also include all O(α_s^2) contributions that occur in production for these processes: those mediated by the exchange of a single off-shell vector boson in the s-channel, and those which arise from the coupling of the Higgs boson to a closed loop of fermions. Final states of interest for Run II phenomenology were studied, namely H → bb̄, γγ and WW*. The treatment of the H → bb̄ decay includes QCD corrections at NLO. We use the recently developed N-jettiness regularization procedure, and study its viability in the presence of a large final-state phase space by studying pp → V(H → WW*) → leptons.
Beyond generalized Proca theories
NASA Astrophysics Data System (ADS)
Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji
2016-09-01
We consider higher-order derivative interactions beyond second-order generalized Proca theories that propagate only the three desired polarizations of a massive vector field besides the two tensor polarizations from gravity. These new interactions follow construction criteria similar to those arising in the extension of scalar-tensor Horndeski theories to Gleyzes-Langlois-Piazza-Vernizzi (GLPV) theories. On the isotropic cosmological background, we show the existence of a constraint with a vanishing Hamiltonian that removes the would-be Ostrogradski ghost. We study the behavior of linear perturbations on top of the isotropic cosmological background in the presence of a matter perfect fluid and find the same number of propagating degrees of freedom as in generalized Proca theories (two tensor polarizations, two transverse vector modes, and two scalar modes). Moreover, we obtain the conditions for the avoidance of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations. We observe key differences in the scalar sound speed, which is mixed with the matter sound speed outside the domain of generalized Proca theories.
Sano, Masahiro; Sano, Sayaka; Oka, Noriyuki; Yoshino, Kayoko; Kato, Toshinori
2013-12-04
Individuals who habitually breathe through the mouth are more likely than nasal breathers to have sleep disorders and attention deficit hyperactive disorder. We hypothesized that brain hemodynamic responses in the prefrontal cortex might be different for mouth and nasal breathing. To test this hypothesis, we measured changes in oxyhemoglobin and deoxyhemoglobin in the prefrontal cortex during mouth breathing and nasal breathing in healthy adults (n=9) using vector-based near-infrared spectroscopy. The angle k, calculated from changes in oxyhemoglobin and deoxyhemoglobin and indicating the degree of oxygen exchange, was significantly higher during mouth breathing (P<0.05), indicating an increased oxygen load. Mouth breathing also caused a significant increase in deoxyhemoglobin, but oxyhemoglobin did not increase. This difference in oxygen load in the brain arising from different breathing routes can be evaluated quantitatively using vector-based near-infrared spectroscopy. Phase responses could help to provide an earlier and more reliable diagnosis of a patient's habitual breathing route than a patient interview.
Sano, Sayaka; Oka, Noriyuki; Yoshino, Kayoko; Kato, Toshinori
2013-01-01
Individuals who habitually breathe through the mouth are more likely than nasal breathers to have sleep disorders and attention deficit hyperactive disorder. We hypothesized that brain hemodynamic responses in the prefrontal cortex might be different for mouth and nasal breathing. To test this hypothesis, we measured changes in oxyhemoglobin and deoxyhemoglobin in the prefrontal cortex during mouth breathing and nasal breathing in healthy adults (n=9) using vector-based near-infrared spectroscopy. The angle k, calculated from changes in oxyhemoglobin and deoxyhemoglobin and indicating the degree of oxygen exchange, was significantly higher during mouth breathing (P<0.05), indicating an increased oxygen load. Mouth breathing also caused a significant increase in deoxyhemoglobin, but oxyhemoglobin did not increase. This difference in oxygen load in the brain arising from different breathing routes can be evaluated quantitatively using vector-based near-infrared spectroscopy. Phase responses could help to provide an earlier and more reliable diagnosis of a patient’s habitual breathing route than a patient interview. PMID:24169579
Construction of siRNA/miRNA expression vectors based on a one-step PCR process
Xu, Jun; Zeng, Jie Qiong; Wan, Gang; Hu, Gui Bin; Yan, Hong; Ma, Li Xin
2009-01-01
Background: RNA interference (RNAi) has become a powerful means for silencing target gene expression in mammalian cells and is envisioned to be useful in therapeutic approaches to human disease. In recent years, high-throughput, genome-wide screening of siRNA/miRNA libraries has emerged as a desirable approach. Current methods for constructing siRNA/miRNA expression vectors require the synthesis of long oligonucleotides, which is costly and suffers from mutation problems. Results: Here we report an ingenious method to solve traditional problems associated with construction of siRNA/miRNA expression vectors. We synthesized shorter primers (< 50 nucleotides) to generate a linear expression structure by PCR. The PCR products were directly transformed into chemically competent E. coli and converted to functional vectors in vivo via homologous recombination. The positive clones could be easily screened under UV light. Using this method we successfully constructed over 500 functional siRNA/miRNA expression vectors. Sequencing of the vectors confirmed a high accuracy rate. Conclusion: This novel, convenient, low-cost and highly efficient approach may be useful for high-throughput assays of RNAi libraries. PMID:19490634
Fusion of Positive Energy Representations of LSpin(2n)
NASA Astrophysics Data System (ADS)
Toledano-Laredo, V.
2004-09-01
Building upon the Jones-Wassermann program of studying Conformal Field Theory using operator algebraic tools, and the work of A. Wassermann on the loop group LSU(n) (Invent. Math. 133 (1998), 467-538), we give a solution to the problem of fusion for the loop group of Spin(2n). Our approach relies on the use of A. Connes' tensor product of bimodules over a von Neumann algebra to define a multiplicative operation (Connes fusion) on the (integrable) positive energy representations of a given level. The notion of bimodules arises by restricting these representations to loops with support contained in an interval I of the circle or its complement. We study the corresponding Grothendieck ring and show that fusion with the vector representation is given by the Verlinde rules. The computation rests on 1) the solution of a 6-parameter family of Knizhnik-Zamolodchikov equations and the determination of its monodromy, 2) the explicit construction of the primary fields of the theory, which allows us to prove that they define operator-valued distributions and 3) the algebraic theory of superselection sectors developed by Doplicher-Haag-Roberts.
A new class of N=2 topological amplitudes
NASA Astrophysics Data System (ADS)
Antoniadis, I.; Hohenegger, S.; Narain, K. S.; Sokatchev, E.
2009-12-01
We describe a new class of N=2 topological amplitudes that compute a particular class of BPS terms in the low energy effective supergravity action. Specifically they compute the coupling F(…), where F, λ and ϕ are gauge field strengths, gauginos and holomorphic vector multiplet scalars. The novel feature of these terms is that they depend both on the vector and hypermultiplet moduli. The BPS nature of these terms implies that they satisfy a holomorphicity condition with respect to vector moduli and a harmonicity condition as well as a second-order differential equation with respect to hypermultiplet moduli. We study these conditions explicitly in heterotic string theory and show that they are indeed satisfied up to anomalous boundary terms in the world-sheet moduli space. We also analyze the boundary terms in the holomorphicity and harmonicity equations at a generic point in the vector and hyper moduli space. In particular we show that the obstruction to the holomorphicity arises from the one loop threshold correction to the gauge couplings and we argue that this is due to the contribution of non-holomorphic couplings to the connected graphs via elimination of the auxiliary fields.
Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Hendrickson; T.G. Kolda
1998-09-01
A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix-transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
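For concreteness, the two kernels in question look as follows in a scalar CSR sketch (illustrative only): the row sweep and the transposed column sweep touch the input and output vectors in different patterns, which is what makes the partitioning problem naturally bipartite.

    import numpy as np

    def csr_matvec(indptr, indices, data, x):
        # y = A @ x: one sparse dot product per row.
        y = np.zeros(len(indptr) - 1)
        for i in range(len(indptr) - 1):
            lo, hi = indptr[i], indptr[i + 1]
            y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
        return y

    def csr_rmatvec(indptr, indices, data, x, n_cols):
        # y = A.T @ x: scatter row i of A, scaled by x[i], into the output.
        y = np.zeros(n_cols)
        for i in range(len(indptr) - 1):
            lo, hi = indptr[i], indptr[i + 1]
            y[indices[lo:hi]] += x[i] * data[lo:hi]
        return y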
Zhang, Wenyan; Zeng, Jing
2017-01-01
An existence result for the solution set of a system of simultaneous generalized vector quasi-equilibrium problems (SSGVQEP for short) is obtained, which improves Theorem 3.1 of the work of Ansari et al. (J. Optim. Theory Appl. 127:27-44, 2005). Moreover, a definition of Hadamard-type well-posedness for (SSGVQEP) is introduced and sufficient conditions for Hadamard well-posedness of (SSGVQEP) are established.
NASA Technical Reports Server (NTRS)
Adams, D. F.; Hartmann, U. G.; Lazarow, L. L.; Maloy, J. O.; Mohler, G. W.
1976-01-01
The design of the vector magnetometer selected for analysis is capable of exceeding the required accuracy of 5 gamma per vector field component. The principal elements that assure this performance level are very low power dissipation triaxial feedback coils surrounding ring core flux-gates and temperature control of the critical components of two-loop feedback electronics. An analysis of the calibration problem points to the need for improved test facilities.
Transduction of satellite cells after prenatal intramuscular administration of lentiviral vectors.
MacKenzie, Tippi C; Kobinger, Gary P; Louboutin, Jean-Pierre; Radu, Antoneta; Javazon, Elizabeth H; Sena-Esteves, Miguel; Wilson, James M; Flake, Alan W
2005-01-01
We have previously reported long-term expression of lacZ in myocytes after in utero intramuscular injection of Mokola and Ebola pseudotyped lentiviral vectors. In further experiments, we have noted that these vectors also transduce small cells at the periphery of the muscle fibers that have the morphology of satellite cells, or muscle stem cells. In this study we performed experiments to further define the morphology and function of these cells. Balb/c mice at 14-15 days gestation were injected intramuscularly with Ebola or Mokola pseudotyped lentiviral vectors carrying CMV-lacZ. Animals were harvested at various time points, muscles were stained with X-gal, and processed for electron microscopy (EM) and immunofluorescence. To determine whether transduced satellite cells were functionally capable of regenerating injured muscles, animals were injected with notexin in the same area 8 weeks after the in utero injection of viral vector. Transmission EM of transduced cells confirmed the ultrastructural appearance of satellite cells. Double immunofluorescence for beta-galactosidase and satellite cell markers demonstrated co-localization of these markers in transduced cells. In the notexin-injured animals, small blue cells were seen at the areas of regeneration that co-localized beta-galactosidase with markers of regenerating satellite cells. Central nucleated blue fibers were seen at late time points, indicating regenerated muscle fibers arising from a transduced satellite cell. This study demonstrates transduction of muscle satellite cells following prenatal viral vector mediated gene transfer. These findings may have important implications for gene therapy strategies directed toward muscular dystrophy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chong; Yang, Zhan-Ying, E-mail: zyyang@nwu.edu.cn; Zhao, Li-Chen, E-mail: zhaolichen3@163.com
We study vector localized waves on a continuous wave background with higher-order effects in a two-mode optical fiber. The striking properties of transition, coexistence, and interaction of these localized waves arising from higher-order effects are revealed in combination with corresponding modulation instability (MI) characteristics. It shows that these vector localized wave properties have no analogues in the case without higher-order effects. Specifically, compared to the scalar case, an intriguing transition between bright–dark rogue waves and w-shaped–anti-w-shaped solitons, which occurs as a result of the attenuation of the MI growth rate to vanishing in the zero-frequency perturbation region, is exhibited with the relative background frequency. In particular, our results show that the w-shaped–anti-w-shaped solitons can coexist with breathers, coinciding with the MI analysis where the coexistence condition is a mixture of a modulation stability and MI region. It is interesting that their interaction is inelastic and describes a fusion process. In addition, we demonstrate an annihilation phenomenon for the interaction of two w-shaped solitons which is identified essentially as an inelastic collision in this system. -- Highlights: •Vector rogue wave properties induced by higher-order effects are studied. •A transition between vector rogue waves and solitons is obtained. •The link between the transition and modulation instability (MI) is demonstrated. •The coexistence of vector solitons and breathers coincides with the MI features. •An annihilation phenomenon for the vector two w-shaped solitons is presented.
Quenching rate for a nonlocal problem arising in the micro-electro mechanical system
NASA Astrophysics Data System (ADS)
Guo, Jong-Shenq; Hu, Bei
2018-03-01
In this paper, we study the quenching rate of the solution for a nonlocal parabolic problem which arises in the study of the micro-electro mechanical system. This question is equivalent to the stabilization of the solution to the transformed problem in self-similar variables. First, some a priori estimates are provided. In order to construct a Lyapunov function, due to the lack of time monotonicity property, we then derive some very useful and challenging estimates by a delicate analysis. Finally, with this Lyapunov function, we prove that the quenching rate is self-similar which is the same as the problem without the nonlocal term, except the constant limit depends on the solution itself.
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
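To make the contrast concrete: the Euclidean case has a closed form, while a polyhedral-norm analogue becomes a linear program. The sketch below is one illustrative pairing, not the paper's construction, using the l-infinity residual norm with an l1 penalty (both polyhedral), solved with SciPy's LP routine.

    import numpy as np
    from scipy.optimize import linprog

    def tikhonov_l2(A, b, lam):
        # Classical Tikhonov: argmin ||A x - b||_2^2 + lam * ||x||_2^2.
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def tikhonov_polyhedral(A, b, lam):
        # Polyhedral analogue: argmin ||A x - b||_inf + lam * ||x||_1,
        # recast as an LP in (x, u, t) with |x_i| <= u_i, |(A x - b)_j| <= t.
        m, n = A.shape
        I, Zm = np.eye(n), np.zeros((m, n))
        ones, zn = np.ones((m, 1)), np.zeros((n, 1))
        c = np.concatenate([np.zeros(n), lam * np.ones(n), [1.0]])
        A_ub = np.block([[A, Zm, -ones], [-A, Zm, -ones],
                         [I, -I, zn], [-I, -I, zn]])
        b_ub = np.concatenate([b, -b, np.zeros(2 * n)])
        bounds = [(None, None)] * n + [(0, None)] * (n + 1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]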
Unconventional superconductivity in iron pnictides: Magnon mediated pairing
NASA Astrophysics Data System (ADS)
Kar, Rakesh; Paul, Bikash Chandra; Misra, Anirban
2018-02-01
We study the phenomenon of unconventional superconductivity in iron pnictides on the basis of a localized-itinerant model. In this proposed model, superconductivity arises from the itinerant part of the electrons, whereas antiferromagnetism arises from the localized part. The itinerant electrons move over the sea of localized electrons in antiferromagnetic alignment and interact with them, resulting in the excitation of magnons. We find that triplet pairing of itinerant electrons via magnons is possible in the checkerboard antiferromagnetic spin configuration of the substances CaFe2As2 and BaFe2As2 in pure form for umklapp scattering with scattering wave vector Q = (1, 1) in units of π/a, where a is one orthorhombic crystal parameter; this is the nesting vector between two Fermi surfaces. The interaction potential obtained in this way increases as the nearest-neighbour (NN) exchange couplings decrease. Under ambient pressure, with the stripe antiferromagnetic spin configuration, a very small value of the coupling constant is obtained, which does not give rise to superconductivity. The critical temperatures of superconductivity of the substances CaFe2As2 and BaFe2As2 in the higher-pressure checkerboard antiferromagnetic spin configuration are found to be 12.12 K and 29.95 K respectively, which are in agreement with the experimental results.
Golden Ratio in a Coupled-Oscillator Problem
ERIC Educational Resources Information Center
Moorman, Crystal M.; Goff, John Eric
2007-01-01
The golden ratio appears in a classical mechanics coupled-oscillator problem that many undergraduates may not solve. Once the symmetry is broken in a more standard problem, the golden ratio appears. Several student exercises arise from the problem considered in this paper.
Compactly supported Wannier functions and algebraic K-theory
NASA Astrophysics Data System (ADS)
Read, N.
2017-03-01
In a tight-binding lattice model with n orbitals (single-particle states) per site, Wannier functions are n-component vector functions of position that fall off rapidly away from some location, and such that a set of them in some sense span all states in a given energy band or set of bands; compactly supported Wannier functions are such functions that vanish outside a bounded region. They arise not only in band theory, but also in connection with tensor-network states for noninteracting fermion systems, and for flat-band Hamiltonians with strictly short-range hopping matrix elements. In earlier work, it was proved that for general complex band structures (vector bundles) or general complex Hamiltonians—that is, class A in the tenfold classification of Hamiltonians and band structures—a set of compactly supported Wannier functions can span the vector bundle only if the bundle is topologically trivial, in any dimension d of space, even when use of an overcomplete set of such functions is permitted. This implied that, for a free-fermion tensor network state with a nontrivial bundle in class A, any strictly short-range parent Hamiltonian must be gapless. Here, this result is extended to all ten symmetry classes of band structures without additional crystallographic symmetries, with the result that in general the nontrivial bundles that can arise from compactly supported Wannier-type functions are those that may possess, in each of d directions, the nontrivial winding that can occur in the same symmetry class in one dimension, but nothing else. The results are obtained from a very natural usage of algebraic K-theory, based on a ring of polynomials in e^(±ik_x), e^(±ik_y), ..., which occur as entries in the Fourier-transformed Wannier functions.
Calculation of Rayleigh type sums for zeros of the equation arising in spectral problem
NASA Astrophysics Data System (ADS)
Kostin, A. B.; Sherstyukov, V. B.
2017-12-01
For zeros of the equation (arising in the oblique derivative problem) μ J_n′(μ) cos α + i n J_n(μ) sin α = 0, μ ∈ ℂ, with parameters n ∈ ℤ, α ∈ [−π/2, π/2], where J_n(μ) is the Bessel function, special summation relationships are proved. The obtained results are consistent with the theory of the well-known Rayleigh sums calculated over zeros of the Bessel function.
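A quick numerical anchor for the special case α = π/2 (and n ≠ 0), where the equation degenerates to J_n(μ) = 0 and the sums specialize to the classical Rayleigh sums, e.g. Σ_k j_{n,k}^(−2) = 1/(4(n+1)):

    import numpy as np
    from scipy.special import jn_zeros

    n, K = 3, 5000
    zeros = jn_zeros(n, K)                # first K positive zeros of J_n
    partial = np.sum(zeros ** -2.0)       # truncated Rayleigh sum
    print(partial, 1.0 / (4 * (n + 1)))   # agree up to the O(1/K) tail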
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
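The block iteration at the heart of the paper is available off the shelf; the toy below (with a random sparse stand-in for the CI Hamiltonian, not the paper's matrices) shows why SpMM dominates: each LOBPCG step multiplies the matrix by a whole block of vectors.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lobpcg

    rng = np.random.default_rng(0)
    n, nev = 10000, 4
    H = sp.random(n, n, density=1e-3, random_state=0)
    H = (H + H.T) * 0.5 + sp.diags(np.arange(n, dtype=float))  # symmetric
    X = rng.standard_normal((n, nev))        # block of starting vectors
    vals, vecs = lobpcg(H, X, largest=False, tol=1e-6, maxiter=500)
    print(vals)                              # a few extreme eigenvalues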
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regression techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
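Two of the estimators named above, applied to simulated phase data (a sketch with arbitrary parameter values; the paper's point is that both are unbiased while their true confidence intervals depend on the noise correlations):

    import numpy as np

    tau, N, drift = 1.0, 1000, 1e-10        # s, samples, drift per second
    t = tau * np.arange(N)
    noise = 1e-9 * np.random.default_rng(1).standard_normal(N)
    x = 0.5 * drift * t**2 + noise           # phase (time error) data
    d_quad = 2.0 * np.polyfit(t, x, 2)[0]    # regress phase on a quadratic
    y = np.diff(x) / tau                     # fractional frequency series
    d_diff = np.mean(np.diff(y)) / tau       # mean first difference of freq
    print(d_quad, d_diff)                    # both estimate `drift`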
NASA Technical Reports Server (NTRS)
Gilbertsen, Noreen D.; Belytschko, Ted
1990-01-01
The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.
Link-Based Similarity Measures Using Reachability Vectors
Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin
2014-01-01
We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to all the objects in the given data, and the value of each element denotes the weight for the corresponding object. As for this weight value, we propose to utilize the probability of reaching from the target object to the specific object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
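The reachability vectors are straightforward to compute by power iteration; a minimal sketch (assuming a column-stochastic transition matrix P for the link graph, with the restart probability as a free parameter):

    import numpy as np

    def rwr_vector(P, seed, c=0.15, n_iter=100):
        # Random Walk with Restart: walk on the link graph, restarting at
        # `seed` with probability c; r converges to the reachability
        # (stationary) vector representing the seed object.
        r = np.zeros(P.shape[0])
        e = np.zeros(P.shape[0]); e[seed] = 1.0
        for _ in range(n_iter):
            r = (1 - c) * (P @ r) + c * e
        return r

    def link_similarity(P, a, b):
        # Similarity of objects a and b = cosine of their RWR vectors.
        ra, rb = rwr_vector(P, a), rwr_vector(P, b)
        return ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb))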
Brady, Oliver J.; Godfray, H. Charles J.; Tatem, Andrew J.; Gething, Peter W.; Cohen, Justin M.; McKenzie, F. Ellis; Perkins, T. Alex; Reiner, Robert C.; Tusting, Lucy S.; Sinka, Marianne E.; Moyes, Catherine L.; Eckhoff, Philip A.; Scott, Thomas W.; Lindsay, Steven W.; Hay, Simon I.; Smith, David L.
2016-01-01
Background: Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling-up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity that suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Methods and Results: Using analytical solutions to updated equations for vectorial capacity we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Conclusions: Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. PMID:26822603
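The classical sensitivity argument being revisited can be reproduced with Macdonald's formula C = m a² pⁿ / (−ln p). The snippet below uses illustrative parameter values only; it shows why adult survival p historically dominated density m in intervention choice.

    import numpy as np

    def vectorial_capacity(m, a, p, n):
        # m: mosquitoes per human, a: daily human-biting rate,
        # p: daily adult survival, n: extrinsic incubation period (days).
        return m * a**2 * p**n / -np.log(p)

    base = vectorial_capacity(m=10, a=0.3, p=0.9, n=10)
    print(vectorial_capacity(10, 0.3, 0.8, 10) / base)  # ~11% cut in p: C drops ~85%
    print(vectorial_capacity(5, 0.3, 0.9, 10) / base)   # 50% cut in m: C drops 50%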
Pre-vector variational inequality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lai-Jiu
1994-12-31
Let X be a Hausdorff topological vector space and (Y, D) an ordered Hausdorff topological vector space ordered by the convex cone D. Let L(X, Y) be the space of all bounded linear operators, E ⊂ X a nonempty set, and T : E → L(X, Y), η : E × E → E functions. For x, y ∈ Y, we write x ≮ y if y − x ∉ int D, where int D is the interior of D. We consider the following two problems: find x ∈ E such that ⟨T(x), η(y, x)⟩ ≮ 0 for all y ∈ E; and find x ∈ E such that ⟨T(x), η(y, x)⟩ ≯ 0 for all y ∈ E and ⟨T(x), η(y, x)⟩ ∈ C_p^{w+} = {l ∈ L(X, Y) | ⟨l, η(x, 0)⟩ ≮ 0 for all x ∈ E}, where ⟨T(x), y⟩ denotes the linear operator T(x) applied to y, that is, T(x)(y). We call the first the pre-vector variational inequality problem (Pre-VVIP) and the second the pre-vector complementarity problem (Pre-VCP). If X = R^n, Y = R, D = R_+ and η(y, x) = y − x, then our problem is the well-known variational inequality first studied by Hartman and Stampacchia. If Y = R, D = R_+ and η(y, x) = y − x, our problem is the variational problem in infinite dimensional space. In this research, we impose different conditions on T(x), η, X, and ⟨T(x), η(y, x)⟩ and investigate existence theorems for these problems. As an application of one of our results, we establish an existence theorem for a weak minimum of the problem (P): V-min f(x) subject to x ∈ E, where f : X → Y is a Fréchet differentiable invex function.
Xie, Hong-Bo; Huang, Hu; Wu, Jianhua; Liu, Lei
2015-02-01
We present a multiclass fuzzy relevance vector machine (FRVM) learning mechanism and evaluate its performance to classify multiple hand motions using surface electromyographic (sEMG) signals. The relevance vector machine (RVM) is a sparse Bayesian kernel method which avoids some limitations of the support vector machine (SVM). However, RVM still suffers the difficulty of possible unclassifiable regions in multiclass problems. We propose two fuzzy membership function-based FRVM algorithms to solve such problems, based on experiments conducted on seven healthy subjects and two amputees with six hand motions. Two feature sets, namely, AR model coefficients and root mean square value (AR-RMS), and wavelet transform (WT) features, are extracted from the recorded sEMG signals. Fuzzy support vector machine (FSVM) analysis was also conducted for wide comparison in terms of accuracy, sparsity, training and testing time, as well as the effect of training sample sizes. FRVM yielded comparable classification accuracy with dramatically fewer support vectors in comparison with FSVM. Furthermore, the processing delay of FRVM was much less than that of FSVM, whilst the training time of FSVM was much faster than that of FRVM. The results indicate that an FRVM classifier trained using sufficient samples can achieve comparable generalization capability to FSVM with significant sparsity in multi-channel sEMG classification, which is more suitable for sEMG-based real-time control applications.
Common Methodological Problems in Research on the Addictions.
ERIC Educational Resources Information Center
Nathan, Peter E.; Lansky, David
1978-01-01
Identifies common problems in research on the addictions and offers suggestions for remediating these methodological problems. The addictions considered include alcoholism and drug dependencies. Problems considered are those arising from inadequate, incomplete, or biased reviews of relevant literatures and methodological shortcomings of subject…
Gu, Rui; Xu, Jinglei
2014-01-01
The dual throat nozzle (DTN) technique is capable of achieving higher thrust-vectoring efficiencies than other fluidic techniques, without significantly compromising thrust efficiency during vectoring operation. The excellent performance of the DTN is mainly due to the concaved cavity. In this paper, two DTNs of different scales have been investigated by unsteady numerical simulations to compare the parameter variations and study the effects of the cavity during the vector starting process. The results indicate that during the vector starting process, dynamic loads may be generated, which is a potentially challenging problem for aircraft trim and control.
Hadronic three-body decays of B mesons
NASA Astrophysics Data System (ADS)
Cheng, Hai-Yang
2016-04-01
Hadronic three-body decays of B mesons receive both resonant and nonresonant contributions. Dominant nonresonant contributions to tree-dominated three-body decays arise from the b → u tree transition which can be evaluated using heavy meson chiral perturbation theory valid in the soft meson limit. For penguin-dominated decays, nonresonant signals come mainly from the penguin amplitude governed by the matrix elements of scalar densities.
Elastic Gauge Fields in Weyl Semimetals
NASA Astrophysics Data System (ADS)
Cortijo, Alberto; Ferreiros, Yago; Landsteiner, Karl; Hernandez Vozmediano, Maria Angeles
We show that, as it happens in graphene, elastic deformations couple to the electronic degrees of freedom as pseudo gauge fields in Weyl semimetals. We derive the form of the elastic gauge fields in a tight-binding model hosting Weyl nodes and see that this vector electron-phonon coupling is chiral, providing an example of axial gauge fields in three dimensions. As an example of the new response functions that arise associated to these elastic gauge fields, we derive a non-zero phonon Hall viscosity for the neutral system at zero temperature. The axial nature of the fields provides a test of the chiral anomaly in high energy physics with three axial vector couplings.
Object recognition of real targets using modelled SAR images
NASA Astrophysics Data System (ADS)
Zherdev, D. A.
2017-12-01
In this work the problem of object recognition in SAR images is studied. The recognition algorithm is based on the computation of conjugation indices with class vectors. The support subspaces for each class are constructed by excluding the most and the least correlated vectors in a class. In the study we examine the possibility of a significant reduction of the feature vector size, which leads to a decrease in recognition time. The images of targets form the feature vectors, which are transformed using a pre-trained convolutional neural network (CNN).
New Term Weighting Formulas for the Vector Space Method in Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chisholm, E.; Kolda, T.G.
The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
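A minimal vector-space sketch with one common weighting choice (log-tf times idf; the report's point is precisely that many such formulas exist and their retrieval performance differs):

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        # docs: list of token lists. Weight = (1 + log tf) * log(N / df).
        N = len(docs)
        df = Counter(t for doc in docs for t in set(doc))
        vocab = sorted(df)
        vecs = []
        for doc in docs:
            tf = Counter(doc)
            vecs.append([(1 + math.log(tf[t])) * math.log(N / df[t])
                         if tf[t] else 0.0 for t in vocab])
        return vocab, vecs

    def cosine(u, v):
        # Ranking score between a document vector and a query vector.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0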
A multistage motion vector processing method for motion-compensated frame interpolation.
Huang, Ai- Mei; Nguyen, Truong Q
2008-05-01
In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of having broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving the structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter to avoid choosing an identical unreliable one. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
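The unconstrained core of the vector median filter is easy to state (a sketch; the paper's refinement additionally constrains the choice using per-vector reliability):

    import numpy as np

    def vector_median(candidates):
        # Return the candidate motion vector minimizing the summed
        # Euclidean distance to all other candidates.
        c = np.asarray(candidates, dtype=float)           # shape (k, 2)
        d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
        return c[np.argmin(d.sum(axis=1))]

    print(vector_median([(1, 1), (1, 2), (2, 1), (9, -7)]))  # outlier rejected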
Solution of a large hydrodynamic problem using the STAR-100 computer
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Howser, L. M.
1976-01-01
A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speedup of the STAR-100 over the CDC 6600 is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR-100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR-100.
Intertwined Hamiltonians in two-dimensional curved spaces
NASA Astrophysics Data System (ADS)
Aghababaei Samani, Keivan; Zarei, Mina
2005-04-01
The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for Euclidean plane, Minkowski plane, Poincaré half plane (AdS2), de Sitter plane (dS2), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle.
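For reference, the intertwining relation in question has the standard form (a reminder of the general mechanism, not a formula quoted from the paper):

    \[
      L H_1 = H_2 L \quad\Longrightarrow\quad
      H_1 \psi = E \psi \;\Rightarrow\; H_2 (L\psi) = E\,(L\psi),
    \]

so the intertwining operator L maps eigenfunctions of H_1 to eigenfunctions of H_2 at the same energy; the paper's contribution is that on these curved spaces L can be assembled from the Killing vector fields of the isometry group.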
Low thrust propulsion system effects on communication satellites.
NASA Technical Reports Server (NTRS)
Hall, D. F.; Lyon, W. C.
1972-01-01
Choice of type and placement of thrusters on spacecraft (s/c) should include consideration of their effects on other subsystems. Models are presented of the exhaust plumes of mercury, cesium, colloid, hydrazine, ammonia, and Teflon rockets. Effects arising from plume impingement on s/c surfaces, radio frequency interference, optical interference, and earth environmental contamination are discussed. Some constraints arise in the placement of mercury, cesium, and Teflon thrusters. Few problems exist with other thruster types, nor is earth contamination a problem.
The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.
1989-01-01
The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
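For a sense of what a shifted Lanczos solve of Kx = λMx looks like in practice, here is a sketch using SciPy's Lanczos-based eigsh; the toy matrices and the single fixed shift are illustrative stand-ins for a finite element model and for LANZ's dynamic shifting:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Stiffness K and mass M for a toy 1-D chain (stand-ins for FE matrices).
n = 200
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = sp.identity(n, format="csc")

# Lanczos with shift-invert: eigenvalues nearest the shift sigma converge
# fastest. LANZ moves this shift dynamically; here it is fixed at 0.
vals, vecs = eigsh(K, k=5, M=M, sigma=0.0, which="LM")
print(vals)   # the five smallest vibration eigenvalues
```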
Topical Meeting on Optical Bistability Held at Rochester, New York on 15-17 June 1983.
1983-01-01
distortion of their initial directions of polarization: both of the beams are linearly polarized, with their electric vectors either (i) parallel to … New Zealand. … Multistability, self-oscillation, and chaos in a model for polarization switching in optical bistability … a transition sequence arises, consistent with recent observations, when a second circularly polarized pumping beam is applied.
Diffraction-induced instability of coupled dark solitary waves.
Assanto, Gaetano; MacNeil, J Michael L; Smyth, Noel F
2015-04-15
We report on a novel instability arising from the propagation of coupled dark solitary beams governed by coupled defocusing nonlinear Schrödinger equations. Considering dark notches on backgrounds with different wavelengths, hence different diffraction coefficients, we find that the vector dark soliton solution is unstable to radiation modes. Using perturbation theory and numerical integration, we demonstrate that the component undergoing stronger diffraction radiates away, leaving a single dark soliton in the other mode/wavelength.
Resurgent vector-borne diseases as a global health problem.
Gubler, D. J.
1998-01-01
Vector-borne infectious diseases are emerging or resurging as a result of changes in public health policy, insecticide and drug resistance, shift in emphasis from prevention to emergency response, demographic and societal changes, and genetic changes in pathogens. Effective prevention strategies can reverse this trend. Research on vaccines, environmentally safe insecticides, alternative approaches to vector control, and training programs for health-care workers are needed. PMID:9716967
The vector homology problem in diagnostic nucleic acid hybridization of clinical specimens.
Ambinder, R F; Charache, P; Staal, S; Wright, P; Forman, M; Hayward, S D; Hayward, G S
1986-01-01
Nucleic acid hybridization techniques using cloned probes are finding application in assays of clinical specimens in research and diagnostic laboratories. The probes that we and others have used are recombinant plasmids composed of viral inserts and bacterial plasmid vectors such as pBR322. We suspected that there was material homologous to pBR322 present in many clinical samples, because hybridization occurred in samples which lacked evidence of virus by other techniques. If the presence of this vector-homologous material were unrecognized, hybridization in the test sample might erroneously be interpreted as indicating the presence of viral sequences. In this paper we demonstrate specific hybridization of labeled pBR322 DNA with DNA from various clinical samples. Evidence is presented that nonspecific probe trapping could not account for this phenomenon. In mixing experiments, it is shown that contamination of clinical samples with bacteria would explain such a result. Approaches tested to circumvent this problem included the use of isolated insert probes, alternate cloning vectors, and cold competitor pBR322 DNA in prehybridization and hybridization mixes. None proved entirely satisfactory. We therefore emphasize that it is essential that all hybridization detection systems use a control probe of the vector alone in order to demonstrate the absence of material with vector homology in the specimen tested. PMID:3013928
The Creation of Space Vector Models of Buildings From RPAS Photogrammetry Data
NASA Astrophysics Data System (ADS)
Trhan, Ondrej
2017-06-01
The results of Remote Piloted Aircraft System (RPAS) photogrammetry are digital surface models and orthophotos. The main problem with the digital surface models obtained is that building walls are not perpendicular and the shapes of roofs are deformed. The task of this paper is to obtain a more accurate digital surface model through building reconstruction. The paper discusses the problem of obtaining and approximating building footprints, reconstructing the final spatial vector digital building model, and modifying the buildings on the digital surface model.
Lie theory and control systems defined on spheres
NASA Technical Reports Server (NTRS)
Brockett, R. W.
1972-01-01
It is shown that in constructing a theory for the most elementary class of control problems defined on spheres, some results from the Lie theory play a natural role. To understand controllability, optimal control, and certain properties of stochastic equations, Lie theoretic ideas are needed. The framework considered here is the most natural departure from the usual linear system/vector space problems which have dominated control systems literature. For this reason results are compared with those previously available for the finite dimensional vector space case.
Solving the multi-frequency electromagnetic inverse source problem by the Fourier method
NASA Astrophysics Data System (ADS)
Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi
2018-07-01
This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.
Progress toward a circulation atlas for application to coastal water siting problems
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Gordon, H. H.
1978-01-01
Circulation data needed to resolve coastal siting problems are assembled from historical hydrographic and remote sensing studies in the form of a Circulation Atlas. Empirical data are used instead of numerical model simulations to achieve fine resolution and to include fronts and convergence zones. Eulerian and Lagrangian data are collected, transformed, and combined into trajectory maps and current vector maps as a function of tidal phase and wind vector. Initial Atlas development is centered on the Elizabeth River, Hampton Roads, Virginia.
NASA Technical Reports Server (NTRS)
Garcia, F., Jr.
1975-01-01
This paper presents a solution to a complex lifting reentry three-degree-of-freedom problem by using the calculus of variations to minimize the integral of the sum of the aerodynamic loads and the heat rate input to the vehicle. The entry problem considered does not have state and/or control constraints along the trajectory. The calculus of variations method applied to this problem gives rise to a set of necessary conditions which are used to formulate a two-point boundary value (TPBV) problem. This TPBV problem is then numerically solved by an improved method of perturbation functions (IMPF) using several starting co-state vectors. These vectors were chosen with successively larger norms to show that the envelope of convergence is significantly increased using this method, and cases are presented to point this out.
Flux-vector splitting algorithm for chain-rule conservation-law form
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Nguyen, H. L.; Willis, E. A.; Steinthorsson, E.; Li, Z.
1991-01-01
A flux-vector splitting algorithm with Newton-Raphson iteration was developed for the 'full compressible' Navier-Stokes equations cast in chain-rule conservation-law form. The algorithm is intended for problems with deforming spatial domains and for problems whose governing equations cannot be cast in strong conservation-law form. The usefulness of the algorithm for such problems was demonstrated by applying it to analyze the unsteady, two- and three-dimensional flows inside one combustion chamber of a Wankel engine under nonfiring conditions. Solutions were obtained to examine the algorithm in terms of conservation error, robustness, and ability to handle complex flows on time-dependent grid systems.
NASA Astrophysics Data System (ADS)
Gorgizadeh, Shahnam; Flisgen, Thomas; van Rienen, Ursula
2018-07-01
Generalized eigenvalue problems are standard problems in computational sciences. They may arise in electromagnetics from the discretization of the Helmholtz equation by, for example, the finite element method (FEM). Geometrical perturbations of the structure under concern lead to new generalized eigenvalue problems with different system matrices. Geometrical perturbations may arise from manufacturing tolerances, harsh operating conditions or during shape optimization. Directly solving the eigenvalue problem for each perturbation is computationally costly. The perturbed eigenpairs can be approximated using eigenpair derivatives. Two common approaches for the calculation of eigenpair derivatives, namely the modal superposition method and direct algebraic methods, are discussed in this paper. Based on the direct algebraic methods, an iterative algorithm is developed for efficiently calculating the eigenvalues and eigenvectors of the perturbed geometry from the eigenvalues and eigenvectors of the unperturbed geometry.
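A sketch of the first-order eigenvalue sensitivity that such perturbation approaches build on, assuming a symmetric generalized problem Kx = λMx with M-orthonormal eigenvectors; the dense toy matrices stand in for FEM system matrices:

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalue_derivatives(K, M, dK, dM):
    """First-order eigenvalue sensitivities for the symmetric generalized
    problem K x = lambda M x: d(lambda_i) = x_i^T (dK - lambda_i dM) x_i,
    valid when eigenvectors are normalized so that x_i^T M x_i = 1
    (scipy.linalg.eigh returns them in exactly that normalization)."""
    lam, X = eigh(K, M)
    return np.array([X[:, i] @ (dK - lam[i] * dM) @ X[:, i]
                     for i in range(len(lam))])

# Perturbed eigenvalues are then approximated as lam + eigenvalue_derivatives(
# K, M, dK, dM), avoiding a fresh eigensolve for every small perturbation.
```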
"Analytical" vector-functions I
NASA Astrophysics Data System (ADS)
Todorov, Vladimir Todorov
2017-12-01
In this note we try to give a new (or different) approach to the investigation of analytical vector functions. More precisely, a notion of a power x^n, n ∈ ℕ⁺, of a vector x ∈ ℝ³ is introduced, which allows us to define an "analytical" function f : ℝ³ → ℝ³. Let furthermore f(ξ) = ∑_{n=0}^{∞} a_n ξ^n be an analytical function of the real variable ξ. Here we replace the power ξ^n of the number ξ with the power of a vector x ∈ ℝ³ to obtain a vector "power series" f(x) = ∑_{n=0}^{∞} a_n x^n. We study some properties of the vector series as well as some applications of this idea. Note that an "analytical" vector function does not depend on any basis, which may be useful in research into some problems in physics.
Soft and hard classification by reproducing kernel Hilbert space methods.
Wahba, Grace
2002-12-24
Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {yi, ti, i = 1, …, n}, where yi is the response for the ith subject, and ti is a vector of attributes for this subject. The value of yi is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability pj(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.
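A brief sketch contrasting the two estimators with scikit-learn, assuming an RBF kernel; the Nystroem feature map plus penalized logistic fit is only an approximation to exact RKHS penalized likelihood, and the synthetic data is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hard classification: the support vector machine with an RBF kernel.
svm = SVC(kernel="rbf", C=1.0).fit(X, y)

# Soft classification: penalized (logistic) likelihood in an approximate
# RKHS -- kernel features via Nystroem, then an L2-penalized logistic fit.
soft = make_pipeline(Nystroem(kernel="rbf", n_components=50, random_state=0),
                     LogisticRegression(C=1.0)).fit(X, y)

probs = soft.predict_proba(X)[:, 1]   # estimates of p_j(t)
labels = svm.predict(X)               # hard labels, possibly abstaining in
                                      # extensions with a reject option
```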
The Riemann-Hilbert problem for nonsymmetric systems
NASA Astrophysics Data System (ADS)
Greenberg, W.; Zweifel, P. F.; Paveri-Fontana, S.
1991-12-01
A comparison of the Riemann-Hilbert problem and the Wiener-Hopf factorization problem arising in the solution of half-space singular integral equations is presented. Emphasis is on the factorization of functions lacking the reflection symmetry usual in transport theory.
Frozen orbit realization using LQR analogy
NASA Astrophysics Data System (ADS)
Nagarajan, N.; Rayan, H. Reno
In the case of remote sensing orbits, the frozen orbit concept minimizes altitude variations over a given region using passive means. This is achieved by establishing the mean eccentricity vector at the orbital poles, i.e., by fixing the mean argument of perigee at 90 deg with an appropriate eccentricity to balance the perturbations due to zonal harmonics J2 and J3 of the Earth's potential. The eccentricity vector is a vector whose magnitude is the eccentricity and whose direction is the argument of perigee. Launcher dispersions result in an eccentricity vector which is away from the frozen orbit values. The objective is then to formulate an orbit maneuver strategy that optimizes the fuel required to achieve the frozen orbit in the presence of visibility and impulse constraints. It is shown that the motion of the eccentricity vector around the frozen perigee can be approximated as a circle. Combining the circular motion of the eccentricity vector around the frozen point with the maneuver equation, the following discrete equation is obtained: X(k+1) = AX(k) + Bu(k), where X is the state (i.e., the eccentricity vector components), A the state transition matrix, u the scalar control force (i.e., dV in this case) and B the control matrix which transforms dV into an eccentricity vector change. Based on this, it is shown that the problem of optimizing the fuel can be treated as a Linear Quadratic Regulator (LQR) problem, in which the maneuvers can be computed with control system design tools such as MATLAB by exploiting the LQR analogy.
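A sketch of the discrete LQR machinery described above, in Python rather than MATLAB; the rotation angle, control matrix, and cost weights are illustrative placeholders, not mission values:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Eccentricity-vector dynamics X(k+1) = A X(k) + B u(k): A rotates the
# eccentricity vector about the frozen point, B maps a dV into an
# eccentricity change.
theta = 0.05                                   # rotation per maneuver epoch
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = np.array([[1.0e-3], [0.0]])
Q = np.eye(2)                                  # penalize eccentricity error
R = np.array([[10.0]])                         # penalize dV usage

# Solve the discrete algebraic Riccati equation and form the LQR gain;
# the feedback law u(k) = -K X(k) drives X toward the frozen values.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```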
Effective matrix-free preconditioning for the augmented immersed interface method
NASA Astrophysics Data System (ADS)
Xia, Jianlin; Li, Zhilin; Ye, Xin
2015-12-01
We present effective and efficient matrix-free preconditioning techniques for the augmented immersed interface method (AIIM). AIIM has been developed recently and is shown to be very effective for interface problems and problems on irregular domains. GMRES is often used to solve for the augmented variable(s) associated with a Schur complement A in AIIM that is defined along the interface or the irregular boundary. The efficiency of AIIM relies on how quickly the system for A can be solved. For some applications, there are substantial difficulties involved, such as the slow convergence of GMRES (particularly for free boundary and moving interface problems), and the inconvenience in finding a preconditioner (due to the situation that only the products of A and vectors are available). Here, we propose matrix-free structured preconditioning techniques for AIIM via adaptive randomized sampling, using only the products of A and vectors to construct a hierarchically semiseparable matrix approximation to A. Several improvements over existing schemes are shown so as to enhance the efficiency and also avoid potential instability. The significance of the preconditioners includes: (1) they do not require the entries of A or the multiplication of Aᵀ with vectors; (2) constructing the preconditioners needs only O(log N) matrix-vector products and O(N) storage, where N is the size of A; (3) applying the preconditioners needs only O(N) flops; (4) they are very flexible and do not require any a priori knowledge of the structure of A. The preconditioners are observed to significantly accelerate the convergence of GMRES, with heuristical justifications of the effectiveness. Comprehensive tests on several important applications are provided, such as Navier-Stokes equations on irregular domains with traction boundary conditions, interface problems in incompressible flows, mixed boundary problems, and free boundary problems. The preconditioning techniques are also useful for several other problems and methods.
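A sketch of the randomized-sampling access pattern in its simplest form, a randomized range finder that touches A only through matrix-vector products; the full method layers a hierarchically semiseparable approximation on top of such samples, which this sketch does not attempt:

```python
import numpy as np

def randomized_range(matvec, n, rank, oversample=10, rng=None):
    """Build an orthonormal basis for the dominant range of an n x n
    operator using only products A @ v with random vectors -- the same
    matrix-free access assumed by the preconditioners above."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((n, rank + oversample))
    # Sample the range of A, one matvec per random probe vector.
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(Omega.shape[1])])
    Q, _ = np.linalg.qr(Y)
    return Q[:, :rank]

# Usage: only a matvec closure is needed, never the entries of A.
n = 300
A = np.random.default_rng(0).standard_normal((n, 20)) @ \
    np.random.default_rng(1).standard_normal((20, n))   # rank-20 operator
Q = randomized_range(lambda v: A @ v, n, rank=20)
print(np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A))  # ~ 1e-14
```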
NASA Astrophysics Data System (ADS)
Kit Luk, Chuen; Chesi, Graziano
2015-11-01
This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently from the other ones. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.
NASA Astrophysics Data System (ADS)
Balankin, Alexander S.; Bory-Reyes, Juan; Shapiro, Michael
2016-02-01
One way to deal with physical problems on nowhere differentiable fractals is the mapping of these problems into the corresponding problems for continuum with a proper fractal metric. On this way different definitions of the fractal metric were suggested to account for the essential fractal features. In this work we develop the metric differential vector calculus in a three-dimensional continuum with a non-Euclidean metric. The metric differential forms and Laplacian are introduced, fundamental identities for metric differential operators are established and integral theorems are proved by employing the metric version of the quaternionic analysis for the Moisil-Teodoresco operator, which has been introduced and partially developed in this paper. The relations between the metric and conventional operators are revealed. It should be emphasized that the metric vector calculus developed in this work provides a comprehensive mathematical formalism for the continuum with any suitable definition of fractal metric. This offers a novel tool to study physics on fractals.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication
ERIC Educational Resources Information Center
Wolf, Michael Maclean
2009-01-01
Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…
NASA Technical Reports Server (NTRS)
Schuler, James J.; Felippa, Carlos A.
1991-01-01
Electromagnetic finite elements are extended based on a variational principle that uses the electromagnetic four potential as primary variable. The variational principle is extended to include the ability to predict a nonlinear current distribution within a conductor. The extension of this theory is first done on a normal conductor and tested on two different problems. In both problems, the geometry remains the same, but the material properties are different. The geometry is that of a 1-D infinite wire. The first problem is merely a linear control case used to validate the new theory. The second problem is made up of linear conductors with varying conductivities. Both problems perform well and predict current densities that are accurate to within a few ten thousandths of a percent of the exact values. The fourth potential is then removed, leaving only the magnetic vector potential, and the variational principle is further extended to predict magnetic potentials, magnetic fields, the number of charge carriers, and the current densities within a superconductor. The new element produces good results for the mean magnetic field, the vector potential, and the number of superconducting charge carriers despite a relatively high system condition number. The element did not perform well in predicting the current density. Numerical problems inherent to this formulation are explored and possible remedies to produce better current predicting finite elements are presented.
2013-05-28
… those of the support vector machine and relevance vector machine, and the model runs more quickly than the other algorithms. When one class occurs … an incremental support vector machine algorithm for online learning when fewer than 50 data points are available … online learning environments, where data processing occurs one observation at a time and the classification algorithm improves over time with new observations.
A Comparison of Solver Performance for Complex Gastric Electrophysiology Models
Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.
2016-01-01
Computational techniques for solving systems of equations arising in gastric electrophysiology have not been systematically studied for efficiency of the solution process. We present a computationally challenging problem of simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and the mesh resolution required to capture geometric and functional features necessitate efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased workload considerably. Preconditioners based on block Jacobi and algebraic multigrid solvers were found to have the best overall solution times and least iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
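A sketch of an algebraic-multigrid-preconditioned Krylov solve of the kind compared above, assuming the pyamg and SciPy packages are available; the small Poisson matrix is a stand-in for the far larger gastric system matrices:

```python
import numpy as np
from pyamg import ruge_stuben_solver
from pyamg.gallery import poisson
from scipy.sparse.linalg import cg

# A stand-in for a linear system from a tetrahedral discretisation.
A = poisson((50, 50), format="csr")
b = np.ones(A.shape[0])

ml = ruge_stuben_solver(A)              # classical algebraic multigrid
M = ml.aspreconditioner(cycle="V")      # one V-cycle per application
x, info = cg(A, b, M=M)                 # preconditioned conjugate gradients
print(info, np.linalg.norm(b - A @ x))  # info == 0 means converged
```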
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
Problem-Solving during Shared Reading at Kindergarten
ERIC Educational Resources Information Center
Gosen, Myrte N.; Berenst, Jan; de Glopper, Kees
2015-01-01
This paper reports on a conversation analytic study of problem-solving interactions during shared reading at three kindergartens in the Netherlands. It illustrates how teachers and pupils discuss book characters' problems that arise in the events in the picture books. A close analysis of the data demonstrates that problem-solving interactions do…
Transnational Environmental Problems--The United States, Canada, Mexico.
ERIC Educational Resources Information Center
Wilcher, Marshall E.
1983-01-01
Examines problems associated with transboundary environmental pollution, focusing on problems arising between the United States and Mexico and between the United States and Canada. Also discusses new organizational forms developed to bring transboundary issues to a higher policy-making level. (JN)
NASA Astrophysics Data System (ADS)
Cross, Rod
2018-03-01
Experimental and theoretical results are presented concerning the rise of a spinning egg. It was found that an egg rises quickly while it is sliding and then more slowly when it starts rolling. The angular momentum of the egg projected in the XZ plane changed in the same direction as the friction torque, as expected, by rotating away from the vertical Z axis. The latter result does not explain the rise. However, an even larger effect arises from the Y component of the angular momentum vector. As the egg rises, the egg rotates about the Y axis, an effect that is closely analogous to rotation of the egg about the Z axis. Both effects can be described in terms of precession about the respective axes. Steady precession about the Z axis arises from the normal reaction force in the Z direction, while precession about the Y axis arises from the friction force in the Y direction. Precession about the Z axis ceases if the normal reaction force decreases to zero, and precession about the Y axis ceases if the friction force decreases to zero.
Nabi, Razieh; Shpitser, Ilya
2017-01-01
In this paper, we consider the problem of fair statistical inference involving outcome variables. Examples include classification and regression problems, and estimating treatment effects in randomized trials or observational data. The issue of fairness arises in such problems where some covariates or treatments are “sensitive,” in the sense of having potential of creating discrimination. In this paper, we argue that the presence of discrimination can be formalized in a sensible way as the presence of an effect of a sensitive covariate on the outcome along certain causal pathways, a view which generalizes (Pearl 2009). A fair outcome model can then be learned by solving a constrained optimization problem. We discuss a number of complications that arise in classical statistical inference due to this view and provide workarounds based on recent work in causal and semi-parametric inference.
Distinctions between fraud, bias, errors, misunderstanding, and incompetence.
DeMets, D L
1997-12-01
Randomized clinical trials are challenging not only in their design and analysis, but in their conduct as well. Despite the best intentions and efforts, problems often arise in the conduct of trials, including errors, misunderstandings, and bias. In some instances, key players in a trial may discover that they are not able or competent to meet requirements of the study. In a few cases, fraudulent activity occurs. While none of these problems is desirable, randomized clinical trials are usually robust enough to such problems to produce valid results. Other problems are not tolerable. Confusion may arise among scientists, the scientific and lay press, and the public about the distinctions between these areas and their implications. We shall try to define these problems and illustrate their impact through a series of examples.
A non-local free boundary problem arising in a theory of financial bubbles
Berestycki, Henri; Monneau, Regis; Scheinkman, José A.
2014-01-01
We consider an evolution non-local free boundary problem that arises in the modelling of speculative bubbles. The solution of the model is the speculative component in the price of an asset. In the framework of viscosity solutions, we show the existence and uniqueness of the solution. We also show that the solution is convex in space, and establish several monotonicity properties of the solution and of the free boundary with respect to parameters of the problem. To study the free boundary, we use, in particular, the fact that the odd part of the solution solves a more standard obstacle problem. We characterize the regularity of the free boundary and describe its asymptotics as c, the cost of transacting the asset, goes to zero. PMID:25288815
NASA Technical Reports Server (NTRS)
Peslen, C. A.; Koch, S. E.; Uccellini, L. W.
1984-01-01
Satellite-derived cloud motion 'wind' vectors (CMV) are increasingly used in mesoscale and in global analyses, and questions have been raised regarding the uncertainty of the level assignment for the CMV. One of two major problems in selecting a level for the CMV is related to uncertainties in assigning the motion vector to either the cloud top or base. The second problem is related to the inability to transfer the 'wind' derived from the CMV at individually specified heights to a standard coordinate surface. The objective of the present investigation is to determine whether the arbitrary level assignment represents a serious obstacle to the use of cloud motion wind vectors in the mesoscale analysis of a severe storm environment.
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
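A sketch of conventional point cyclic reduction on a tridiagonal system, the scalar algorithm that VCR adapts; the eliminations at each level are mutually independent, which is exactly what a pipelined vector machine exploits:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by point cyclic reduction.
    a: sub-diagonal (a[0] must be 0), b: diagonal, c: super-diagonal
    (c[-1] must be 0), d: right-hand side; requires len(b) == 2**k - 1."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    i = np.arange(1, n, 2)                  # unknowns kept at the next level
    alpha = -a[i] / b[i - 1]
    gamma = -c[i] / b[i + 1]
    # Eliminate the even-indexed unknowns from the odd-indexed equations;
    # all eliminations at a level are independent, hence vectorizable.
    b2 = b[i] + alpha * c[i - 1] + gamma * a[i + 1]
    d2 = d[i] + alpha * d[i - 1] + gamma * d[i + 1]
    a2 = alpha * a[i - 1]
    c2 = gamma * c[i + 1]
    x = np.empty(n)
    x[i] = cyclic_reduction(a2, b2, c2, d2)
    # Back-substitute the eliminated (even-indexed) unknowns.
    xp = np.concatenate(([0.0], x, [0.0]))  # zero-padded neighbor lookup
    j = np.arange(0, n, 2)
    x[j] = (d[j] - a[j] * xp[j] - c[j] * xp[j + 2]) / b[j]
    return x

# Quick check against a dense solve (n = 2**4 - 1, diagonally dominant).
n = 15
rng = np.random.default_rng(1)
b = 4 + rng.random(n); a = rng.random(n); c = rng.random(n)
a[0] = c[-1] = 0.0; d = rng.random(n)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(T, d))
```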
Parham, Paul E; Waldock, Joanna; Christophides, George K; Hemming, Deborah; Agusto, Folashade; Evans, Katherine J; Fefferman, Nina; Gaff, Holly; Gumel, Abba; LaDeau, Shannon; Lenhart, Suzanne; Mickens, Ronald E; Naumova, Elena N; Ostfeld, Richard S; Ready, Paul D; Thomas, Matthew B; Velasco-Hernandez, Jorge; Michael, Edwin
2015-04-05
Arguably one of the most important effects of climate change is the potential impact on human health. While this is likely to take many forms, the implications for future transmission of vector-borne diseases (VBDs), given their ongoing contribution to global disease burden, are both extremely important and highly uncertain. In part, this is owing not only to data limitations and methodological challenges when integrating climate-driven VBD models and climate change projections, but also, perhaps most crucially, to the multitude of epidemiological, ecological and socio-economic factors that drive VBD transmission, and this complexity has generated considerable debate over the past 10-15 years. In this review, we seek to elucidate current knowledge around this topic, identify key themes and uncertainties, evaluate ongoing challenges and open research questions and, crucially, offer some solutions for the field. Although many of these challenges are ubiquitous across multiple VBDs, more specific issues also arise in different vector-pathogen systems. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Detection of ferromagnetic target based on mobile magnetic gradient tensor system
NASA Astrophysics Data System (ADS)
Gang, Y. I. N.; Yingtang, Zhang; Zhining, Li; Hongbo, Fan; Guoquan, Ren
2016-03-01
Attitude change of a mobile magnetic gradient tensor system critically affects the precision of gradient measurements, thereby increasing ambiguity in target detection. This paper presents a rotational-invariant-based method for locating and identifying ferromagnetic targets. First, the unit magnetic moment vector was derived from the geometrical invariant that the intermediate eigenvector of the magnetic gradient tensor is perpendicular to both the magnetic moment vector and the source-sensor displacement vector. Second, the unit source-sensor displacement vector was derived from the property that the angle between the magnetic moment vector and the source-sensor displacement vector is a rotational invariant. By introducing a displacement vector between two measurement points, the magnetic moment vector and the source-sensor displacement vector were theoretically derived. To cope with the measurement noise present in realistic detection applications, linear equations were formulated using invariants corresponding to several distinct measurement points, and least-squares solutions for the magnetic moment vector and source-sensor displacement vector were obtained. Results of simulation and a principle-verification experiment showed the correctness of the analytical method, along with the practicability of the least-squares method.
Tightening Quantum Speed Limits for Almost All States.
Campaioli, Francesco; Pollock, Felix A; Binder, Felix C; Modi, Kavan
2018-02-09
Conventional quantum speed limits perform poorly for mixed quantum states: They are generally not tight and often significantly underestimate the fastest possible evolution speed. To remedy this, for unitary driving, we derive two quantum speed limits that outperform the traditional bounds for almost all quantum states. Moreover, our bounds are significantly simpler to compute as well as experimentally more accessible. Our bounds have a clear geometric interpretation; they arise from the evaluation of the angle between generalized Bloch vectors.
Isoscalar-isovector mass splittings in excited mesons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geiger, P.
1994-06-01
Mass splittings between the isovector and isoscalar members of meson nonets arise in part from hadronic loop diagrams which violate the Okubo-Zweig-Iizuka rule. Using a model for these loop processes which works qualitatively well in the established nonets, I tabulate predictions for the splittings and associated isoscalar mixing angles in the remaining nonets below about 2 GeV, and explain some of their systematic features. The model predicts significant deviations from ideal mixing in the excited vector nonets.
Detailing the equivalence between real equiangular tight frames and certain strongly regular graphs
NASA Astrophysics Data System (ADS)
Fickus, Matthew; Watson, Cody E.
2015-08-01
An equiangular tight frame (ETF) is a set of unit vectors whose coherence achieves the Welch bound, and so is as incoherent as possible. They arise in numerous applications. It is well known that real ETFs are equivalent to a certain subclass of strongly regular graphs. In this note, we give some alternative techniques for understanding this equivalence. In a later document, we will use these techniques to further generalize this theory.
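A small numerical check of the Welch-bound characterization, assuming unit-norm columns; the two-dimensional simplex example is illustrative:

```python
import numpy as np

def coherence_vs_welch(V):
    """Compare the coherence of the unit-norm columns of V (d x N) with the
    Welch bound sqrt((N - d) / (d (N - 1))); equality holds exactly for an
    equiangular tight frame."""
    d, N = V.shape
    G = V.conj().T @ V                       # Gram matrix of the frame
    coherence = np.abs(G - np.eye(N)).max()  # largest off-diagonal entry
    return coherence, np.sqrt((N - d) / (d * (N - 1)))

# A regular simplex: 3 unit vectors in R^2 at 120 degrees, a real ETF.
ang = 2 * np.pi * np.arange(3) / 3
V = np.vstack([np.cos(ang), np.sin(ang)])
print(coherence_vs_welch(V))   # both values equal 1/2
```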
The New Field Quantities and the Poynting Theorem in Material Medium with Magnetic Monopoles
NASA Astrophysics Data System (ADS)
Zor, Ömer
2016-12-01
The duality transformation was used to define the polarization mechanisms that arise from magnetic monopoles. Then, a dimensional analysis was conducted to describe the displacement and magnetic intensity vectors (constitutive equations) in SI units. Finally, symmetric Maxwell equations in a material medium with new field quantities were introduced. Hence, the Lorentz force and the Poynting theorem were defined with these new field quantities, and many possible definitions of them were constructed.
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix-vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high end multi-core architectures and we use a Message Passing Interface (MPI) + OpenMP hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
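A sketch of the symmetry exploitation at the heart of the comparison, in serial SciPy form rather than the paper's MPI + OpenMP implementation; storing only one triangle roughly halves the matrix data that must be moved:

```python
import numpy as np
import scipy.sparse as sp

def symmetric_spmv(U, x):
    """SpMVM for a symmetric matrix stored as its upper triangle U
    (including the diagonal): y = A x = U x + U^T x - diag(U) * x.
    The diagonal is subtracted once because U x and U^T x both include it."""
    return U @ x + U.T @ x - U.diagonal() * x

# Check against the full matrix on a random symmetric example.
A = sp.random(1000, 1000, density=1e-3, format="csr")
A = A + A.T                          # symmetrize
U = sp.triu(A, format="csr")         # keep only the upper triangle
x = np.ones(A.shape[0])
assert np.allclose(symmetric_spmv(U, x), A @ x)
```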
Hybrid state vector methods for structural dynamic and aeroelastic boundary value problems
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1982-01-01
A computational technique is developed that is suitable for performing preliminary design aeroelastic and structural dynamic analyses of large aspect ratio lifting surfaces. The method proves to be quite general and can be adapted to solving various two point boundary value problems. The solution method, which is applicable to both fixed and rotating wing configurations, is based upon a formulation of the structural equilibrium equations in terms of a hybrid state vector containing generalized force and displacement variables. A mixed variational formulation is presented that conveniently yields a useful form for these state vector differential equations. Solutions to these equations are obtained by employing an integrating matrix method. The application of an integrating matrix provides a discretization of the differential equations that only requires solutions of standard linear matrix systems. It is demonstrated that matrix partitioning can be used to reduce the order of the required solutions. Results are presented for several example problems in structural dynamics and aeroelasticity to verify the technique and to demonstrate its use. These problems examine various types of loading and boundary conditions and include aeroelastic analyses of lifting surfaces constructed from anisotropic composite materials.
Behaviour of mathematics and physics students in solving problem of Vector-Physics context
NASA Astrophysics Data System (ADS)
Sardi; Rizal, M.; Mansyur, J.
2018-04-01
This research aimed to describe the behaviors of mathematics and physics students in solving problems on the vector concept in a physics context. The subjects of the research were students enrolled in the Mathematics Education Study Program and the Physics Education Study Program of FKIP Universitas Tadulako. The selected participants were the students who received the highest score on a test of fundamental vector concepts in each study program. The data were collected through a thinking-aloud activity followed by an interview. The steps of data analysis included data reduction, display, and conclusion drawing. The credibility of the data was tested using a triangulation method. Based on the data analysis, it can be concluded that the two groups of students did not show fundamental differences in problem-solving behavior, especially in the steps of understanding the problem (identifying, collecting and analyzing facts and information), planning (looking for alternative strategies) and carrying out the alternative strategy. The two groups differed only in the evaluation aspect. In contrast to the physics students, who evaluated their answers, the mathematics students did not evaluate their work. However, the difference was not caused by differences in background knowledge.
Efficient Optimization of Low-Thrust Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul
2007-01-01
A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. These algorithms are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
A Summary of Some Discrete-Event System Control Problems
NASA Astrophysics Data System (ADS)
Rudie, Karen
A summary of the area of control of discrete-event systems is given. In this research area, automata and formal language theory is used as a tool to model physical problems that arise in technological and industrial systems. The key ingredients to discrete-event control problems are a process that can be modeled by an automaton, events in that process that cannot be disabled or prevented from occurring, and a controlling agent that manipulates the events that can be disabled to guarantee that the process under control either generates all the strings in some prescribed language or as many strings as possible in some prescribed language. When multiple controlling agents act on a process, decentralized control problems arise. In decentralized discrete-event systems, it is presumed that the agents effecting control cannot each see all event occurrences. Partial observation leads to some problems that cannot be solved in polynomial time and some others that are not even decidable.
The Visual Effects of Intraocular Colored Filters
Hammond, Billy R.
2012-01-01
Modern life is associated with a myriad of visual problems, most notably refractive conditions such as myopia. Human ingenuity has addressed such problems using strategies such as spectacle lenses or surgical correction. There are other visual problems, however, that have been present throughout our evolutionary history and are not as easily solved by simply correcting refractive error. These problems include issues like glare disability and discomfort arising from intraocular scatter, photostress with the associated transient loss in vision that arises from short intense light exposures, or the ability to see objects in the distance through a veil of atmospheric haze. One likely biological solution to these more long-standing problems has been the use of colored intraocular filters. Many species, especially diurnal, incorporate chromophores from numerous sources (e.g., often plant pigments called carotenoids) into ocular tissues to improve visual performance outdoors. This review summarizes information on the utility of such filters focusing on chromatic filtering by humans. PMID:24278692
Brady, Oliver J; Godfray, H Charles J; Tatem, Andrew J; Gething, Peter W; Cohen, Justin M; McKenzie, F Ellis; Perkins, T Alex; Reiner, Robert C; Tusting, Lucy S; Sinka, Marianne E; Moyes, Catherine L; Eckhoff, Philip A; Scott, Thomas W; Lindsay, Steven W; Hay, Simon I; Smith, David L
2016-02-01
Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling-up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity that suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Using analytical solutions to updated equations for vectorial capacity we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. © The Author 2016. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.
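For reference, a sketch of the classical Macdonald-style vectorial capacity that the updated equations build on; the parameter names follow the usual convention, and the numbers are illustrative only:

```python
import numpy as np

def vectorial_capacity(m, a, p, n):
    """Macdonald-style vectorial capacity: C = m a^2 p^n / (-ln p), with
    m mosquitoes per human, a the daily human-biting rate, p the daily
    survival probability, and n the extrinsic incubation period in days."""
    return m * a**2 * p**n / -np.log(p)

# Killing adults (lowering p) acts through both p^n and 1/(-ln p), which is
# why the classical sensitivity analysis prioritizes adulticidal methods.
print(vectorial_capacity(m=10, a=0.3, p=0.9, n=10))   # baseline
print(vectorial_capacity(m=10, a=0.3, p=0.8, n=10))   # modest drop in p
```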
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Conjunct rotation: Codman's paradox revisited.
Wolf, Sebastian I; Fradet, Laetitia; Rettig, Oliver
2009-05-01
This contribution mathematically formalizes Codman's idea of conjunct rotation, a term he used in 1934 to describe a paradoxical phenomenon arising from a closed-loop arm movement. Real (axial) rotation is distinguished from conjunct rotation. For characterizing the latter, the idea of reference vector fields is developed to define the neutral axial position of the humerus for any given orientation of its long axis. This concept largely avoids the typical coordinate singularities arising from the decomposition of 3D joint motion and can therefore be used for postural (axial) assessment of the shoulder joint, both clinically and in sports science, in almost the complete accessible range of motion. The concept, even though algebraically rather complex, might help to give an easier and more intuitive understanding of axial rotation of the shoulder in the complex movements present in daily life and in sports.
Comparative decision models for anticipating shortage of food grain production in India
NASA Astrophysics Data System (ADS)
Chattopadhyay, Manojit; Mitra, Subrata Kumar
2018-01-01
This paper attempts to predict food shortages in advance from the analysis of rainfall during the monsoon months along with other inputs used for crop production, such as land used for cereal production, percentage of area covered under irrigation and fertiliser use. We used six binary classification data mining models, viz., logistic regression, Multilayer Perceptron, kernlab Support Vector Machines, linear discriminant analysis, quadratic discriminant analysis and k-Nearest Neighbors Network, and found that linear discriminant analysis and kernlab Support Vector Machines are equally suitable for predicting per capita food shortage, with 89.69% accuracy in overall prediction and 92.06% accuracy in predicting food shortage (true negative rate). Advance information of food shortage can help policy makers to take remedial measures in order to prevent the devastating consequences arising out of food non-availability.
Fixed points, stable manifolds, weather regimes, and their predictability.
Deremble, Bruno; D'Andrea, Fabio; Ghil, Michael
2009-12-01
In a simple, one-layer atmospheric model, we study the links between low-frequency variability and the model's fixed points in phase space. The model dynamics is characterized by the coexistence of multiple "weather regimes." To investigate the transitions from one regime to another, we focus on the identification of stable manifolds associated with fixed points. We show that these manifolds act as separatrices between regimes. We track each manifold by making use of two local predictability measures arising from the meteorological applications of nonlinear dynamics, namely, "bred vectors" and singular vectors. These results are then verified in the framework of ensemble forecasts issued from "clouds" (ensembles) of initial states. The divergence of the trajectories allows us to establish the connections between zones of low predictability, the geometry of the stable manifolds, and transitions between regimes.
Limits on new forces coexisting with electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kloor, H.; Fischbach, E.; Talmadge, C.
1994-02-15
We consider the limits arising from different electromagnetic systems on the existence of a possible new electromagnetic analogue of the fifth force. Although such a force may have no intrinsic connection to electromagnetism (or gravity), its effects could be manifested through various anomalies in electromagnetic systems, for appropriate values of the coupling strength and range. Our work generalizes that of Bartlett and Loegl (who considered the case of a massive vector field coexisting with massless electrodynamics) to encompass a broad class of phenomenological interactions mediated by both scalar and vector exchanges. By combining data from both gravitational and electromagnetic systems, one can eventually set limits on a new force whose range λ extends from the subatomic scale (λ ≈ 10⁻¹⁵ m) to the astrophysical scale (λ ≈ 10¹² m).
New discretization and solution techniques for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.; Liu, C. H.
1983-01-01
Several topics arising in the finite element solution of the incompressible Navier-Stokes equations are considered. Specifically, the question of choosing finite element velocity/pressure spaces is addressed, particularly from the viewpoint of achieving stable discretizations leading to convergent pressure approximations. The role of artificial viscosity in viscous flow calculations is studied, emphasizing work by several researchers for the anisotropic case. The last section treats the problem of solving the nonlinear systems of equations which arise from the discretization. Time marching methods and classical iterative techniques, as well as some modifications are mentioned.
1980-10-01
… faster than previous algorithms. Indeed, with only minor modifications, the standard multigrid programs solve the LCP with essentially the same efficiency. … Lemma 2.2. Let u_k be the solution of the LCP (2.3), and let u_k > 0 be an approximate solution obtained after one or more projected G_k sweeps. … In Figure 3.2, ‖u‖_G decreased from .293 10 to .110 10 with the expenditure of (99.039 − 94.400) = 4.639 work units. While minor variations do arise, a …
Vector control in developed countries
Peters, Richard F.
1963-01-01
The recent rapid growth of California's population, leading to competition for space between residential, industrial and agricultural interests, the development of its water resources and increasing water pollution provide the basic ingredients of its present vector problems. Within the past half-century, the original mosquito habitats provided by nature have gradually given place to even more numerous and productive habitats of man-made character. At the same time, emphasis in mosquito control has shifted from physical to chemical, with the more recent extension to biological approaches as well. The growing domestic fly problem, continuing despite the virtual disappearance of the horse, is attributable to an increasing amount of organic by-products, stemming from growing communities, expanding industries and changing agriculture. The programme for the control of disease vectors and pest insects and animals directs its major effort to the following broad areas: (1) water management (including land preparation), (2) solid organic wastes management (emphasizing utilization), (3) community management (including design, layout, and storage practices of buildings and grounds), and (4) recreational area management (related to wildlife management). It is apparent that vector control can often employ economics as an ally in securing its objectives. Effective organization of the environment to produce maximum economic benefits to industry, agriculture, and the community results generally in conditions unfavourable to the survival of vector and noxious animal species. Hence, vector prevention or suppression is preferable to control as a programme objective. PMID:20604166
Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.
2013-01-01
Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVMs). To this end, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
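As a rough sketch of the wrapper idea described above (my construction, not the authors' implementation; population size, mutation rate, and the RBF kernel are invented choices), a GA can evolve boolean feature masks whose fitness is an SVM's cross-validated accuracy:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # cross-validated SVM accuracy using only the selected variables
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5          # random initial masks
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]
        cuts = rng.integers(1, n, size=pop // 2)     # one-point crossover
        children = np.array([np.concatenate((parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cuts)])
        children ^= rng.random(children.shape) < p_mut   # bit-flip mutation
        population = np.vstack((parents, children))
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmax(scores)]             # best feature mask found

The returned mask indexes the most representative variables; the final classifier is then an SVM retrained on those columns.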
Application of satellite data in variational analysis for global cyclonic systems
NASA Technical Reports Server (NTRS)
Achtemeier, G. L.
1987-01-01
The research goal was a variational data assimilation method that incorporates, as dynamical constraints, the primitive equations for a moist, convectively unstable atmosphere and the radiative transfer equation. Variables to be adjusted include the three-dimensional vector wind, height, temperature, and moisture from rawinsonde data, and cloud-wind vectors, moisture, and radiance from satellite data. This presents a formidable mathematical problem. In order to facilitate thorough analysis of each of the model components, four variational models that divide the problem naturally according to increasing complexity are defined. Each model is summarized.
NASA Astrophysics Data System (ADS)
Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.
2007-11-01
In this chapter a potential problem with applying Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of VAR, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases when the stability condition is violated, the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
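A minimal sketch of the stability check the abstract alludes to (my construction, not the chapter's code): a VAR(p) model x_t = A_1 x_{t-1} + … + A_p x_{t-p} + e_t is stable exactly when all eigenvalues of its companion matrix lie strictly inside the unit circle.

import numpy as np

def var_is_stable(A_list):
    # A_list: the p (d x d) coefficient matrices A_1..A_p of a fitted VAR(p)
    d, p = A_list[0].shape[0], len(A_list)
    companion = np.zeros((d * p, d * p))
    companion[:d, :] = np.hstack(A_list)         # first block row [A_1 ... A_p]
    companion[d:, :-d] = np.eye(d * (p - 1))     # shifted identity blocks
    rho = np.abs(np.linalg.eigvals(companion)).max()
    return rho < 1.0, rho                        # stable?, spectral radius

stable, rho = var_is_stable([np.array([[0.5, 0.1], [0.0, 0.9]])])
print(stable, rho)   # True, 0.9 for this VAR(1) example

A spectral radius at or above one signals the random-walk-like or explosive behavior mentioned above, in which case Granger-causality conclusions drawn from the fit are suspect.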
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasso, C.H.; Gallardo, M.
2006-01-15
The conclusions extracted from a recent study of the excitation of giant dipole resonances in nuclei at relativistic bombarding energies open the way for a further simplification of the problem. It consists in the elimination of the relativistic scalar and vector electromagnetic potentials and the familiar numerical difficulties associated with their presence in the calculation scheme. The inherent advantage of a reformulation of the problem of relativistic Coulomb excitation of giant dipole resonances along these lines is discussed.
Image segmentation using fuzzy LVQ clustering networks
NASA Technical Reports Server (NTRS)
Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.
1992-01-01
In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
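A rough sketch of the flavor of such a hybrid update rule (the details differ from the authors' network; cluster count, fuzzifier, and learning-rate schedule are invented): FCM-style memberships weight how strongly each prototype is pulled toward each feature vector, and the learning rate decays over the epochs.

import numpy as np

def fuzzy_lvq(X, c=3, m=2.0, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)].copy()   # initial prototypes
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)                    # decaying learning rate
        d2 = ((X[:, None] - V[None]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                # FCM-style memberships
        for k in range(c):                               # LVQ-style prototype pull
            w = U[:, k] ** m
            V[k] += lr * (w @ (X - V[k])) / w.sum()
    return V, U.argmax(axis=1)                           # prototypes, crisp labels

Applied to feature vectors extracted from an image, the returned labels induce the segmentation; being unsupervised, the procedure needs only the number of subregions c.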
Monte Carlo simulation of a dynamical fermion problem: The light q²q̄² system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grondin, G.
1991-01-01
We present results from a Guided Random Walk Monte Carlo simulation of the light q²q̄² system in a Coulomb-plus-linear quark potential model using an Intel iPSC/860 hypercube. A solvable model problem is first considered, after which we study the full q²q̄² system in the (J,I) = (2,2) and (2,0) sectors. We find evidence for no bound states below the vector-vector threshold in these systems. 17 refs., 6 figs.
NASA Astrophysics Data System (ADS)
Rumyantseva, O. D.; Shurup, A. S.
2017-01-01
The paper considers the derivation of the wave equation and Helmholtz equation for solving the tomographic problem of reconstructing combined scalar-vector inhomogeneities describing perturbations of the sound velocity and absorption, the vector field of flows, and perturbations of the density of the medium. Restrictive conditions under which the obtained equations are meaningful are analyzed. Results of numerical simulation of the two-dimensional functional-analytical Novikov-Agaltsov algorithm for reconstructing the flow velocity using the obtained Helmholtz equation are presented.
Analytical-numerical solution of a nonlinear integrodifferential equation in econometrics
NASA Astrophysics Data System (ADS)
Kakhktsyan, V. M.; Khachatryan, A. Kh.
2013-07-01
A mixed problem for a nonlinear integrodifferential equation arising in econometrics is considered. An analytical-numerical method is proposed for solving the problem. Some numerical results are presented.
Representation of magnetic fields in space
NASA Technical Reports Server (NTRS)
Stern, D. P.
1975-01-01
Several methods by which a magnetic field in space can be represented are reviewed, with particular attention to problems of the observed geomagnetic field. Time dependence is assumed to be negligible, and five main classes of representation are described: vector potential, scalar potential, orthogonal vectors, Euler potentials, and magnetic field expansions.
Foliar nutrient analysis of sugar maple decline: retrospective vector diagnosis
Victor R. Timmer; Yuanxin Teng
1999-01-01
Accuracy of traditional foliar analysis of nutrient disorders in sugar maple (Acer saccharum Marsh.) is limited by lack of validation and confounding by nutrient interactions. Vector nutrient diagnosis is relatively free of these problems. The technique is demonstrated retrospectively on four case studies. Diagnostic interpretations consistently…
USDA-ARS?s Scientific Manuscript database
Phlebotomine sand flies are small hematophagous vectors of human and zoonotic leishmaniases present throughout tropical and subtropical areas of the world. Phlebotomus papatasi is a principal vector of human cutaneous leishmaniasis that has presented serious problems for military operations and resi...
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
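A small generic illustration of this kind of computation (the problem data are made up, and the dual-value attribute assumes a recent SciPy with the "highs" method):

import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -5.0])             # maximize 3x + 5y as minimize -(3x + 5y)
A_ub = np.array([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)                 # optimal point and objective value
print(res.ineqlin.marginals)           # dual values of the inequality constraints

The marginals are the shadow prices that underlie the reduced-cost and ranging analyses mentioned above.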
Xia, Wenjun; Mita, Yoshio; Shibata, Tadashi
2016-05-01
Aiming at efficient data condensation and improving accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-means centers are used as substitutes for the entire template set. Then, to enhance the classification performance, critical boundary vectors are selected by a novel learning algorithm, which is completed within a single iteration. Moreover, to remove noisy boundary vectors that can mislead the classification in a generalized manner, a global categorization scheme has been explored and applied to the algorithm. The global characterization automatically categorizes each classification problem and rapidly selects the boundary vectors according to the nature of the problem. Finally, only the critical boundary vectors and the k-means centers are used as the new template set for classification. Experimental results for 24 data sets show that the proposed algorithm can effectively reduce the number of template vectors for classification with a high learning speed. At the same time, it improves the accuracy by an average of 2.17% compared with traditional NN classifiers and also shows greater accuracy than seven other TR methods. We have shown the feasibility of using a proof-of-concept FPGA system of 256 64-D vectors to accelerate the proposed method in hardware. At a 50-MHz clock frequency, the proposed system achieves a 3.86 times higher learning speed than a 3.4-GHz PC, while consuming only 1% of the power used by the PC.
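A loose sketch of the general idea (not the paper's exact algorithm; the cluster count and the center-only 1-NN test are simplifications, and the kmeans2 seed keyword assumes a recent SciPy): keep k-means centers per class, then add boundary vectors, here taken to be training samples that the center-only nearest-neighbor rule misclassifies.

import numpy as np
from scipy.cluster.vq import kmeans2

def reduce_templates(X, y, k=8, seed=0):
    centers, center_labels = [], []
    for cls in np.unique(y):
        c, _ = kmeans2(X[y == cls], k, seed=seed, minit="++")
        centers.append(c)
        center_labels.append(np.full(k, cls))
    C, Cy = np.vstack(centers), np.concatenate(center_labels)
    # boundary vectors: samples misclassified by the center-only 1-NN rule
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    wrong = Cy[d2.argmin(axis=1)] != y
    return np.vstack([C, X[wrong]]), np.concatenate([Cy, y[wrong]])

The reduced set (centers plus boundary vectors) then replaces the full template set in the NN classifier, shrinking both memory use and distance computations.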
ERIC Educational Resources Information Center
McGraw, Rebecca; Patterson, Cody L.
2017-01-01
In this study, we examine how inservice secondary mathematics teachers working together on a contextualized problem negotiate issues arising from the ill-structured nature of the problem such as what assumptions one may make, what real-world considerations should be taken into account, and what constitutes a satisfactory solution. We conceptualize…
NASA Astrophysics Data System (ADS)
Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien
2017-09-01
We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
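A schematic sketch of the optimization step (generic stand-ins, not the paper's polar-coordinate discretization or L-hypersurface tuning): minimizing ||Du − b||² + λ||Lu||² over the stacked velocity vector u leads to the sparse symmetric positive-definite system (DᵀD + λLᵀL)u = Dᵀb.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_vfm(D, b, L, lam):
    A = (D.T @ D + lam * (L.T @ L)).tocsc()   # sparse SPD normal equations
    return spsolve(A, D.T @ b)

# toy usage: recover a smooth 1-D field from noisy point observations
n = 100
D = sp.eye(n, format="csr")                               # data-fidelity operator
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))  # discrete Laplacian
b = (np.sin(np.linspace(0, np.pi, n))
     + 0.1 * np.random.default_rng(1).standard_normal(n))
u = solve_vfm(D, b, L, lam=10.0)

In the actual iVFM setting, D picks out the Doppler-projected component of the velocity field, and additional regularizers enforce mass conservation and boundary conditions.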
New perspectives in tracing vector-borne interaction networks.
Gómez-Díaz, Elena; Figuerola, Jordi
2010-10-01
Disentangling trophic interaction networks in vector-borne systems has important implications in epidemiological and evolutionary studies. Molecular methods based on bloodmeal typing in vectors have been increasingly used to identify hosts. Although most molecular approaches benefit from good specificity and sensitivity, their temporal resolution is limited by the often rapid digestion of blood, and mixed bloodmeals still remain a challenge for bloodmeal identification in multi-host vector systems. Stable isotope analyses represent a novel complementary tool that can overcome some of these problems. The utility of these methods is discussed using examples from different vector-borne systems, and the extents to which they are complementary and versatile are highlighted. There are excellent opportunities for progress in the study of vector-borne transmission networks resulting from the integration of both molecular and stable isotope approaches.
Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers
NASA Technical Reports Server (NTRS)
Bjorner, Nikolaj
2010-01-01
The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied to various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), and program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.
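A small taste of such a query through Z3's Python bindings (my example; install with pip install z3-solver):

from z3 import BitVec, Int, Solver, sat

x = BitVec("x", 32)
i = Int("i")
s = Solver()
s.add(x & (x - 1) == 0, x != 0)   # bit-vector theory: x is a power of two
s.add(i * i < 50, i > 5)          # integer arithmetic theory
if s.check() == sat:
    print(s.model())              # a model satisfying both theory constraints

Combining constraints from several theories in one query, as here, is exactly the "modulo theories" part that software analysis engines rely on.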
Video streaming with SHVC to HEVC transcoding
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; Xiu, Xiaoyu
2015-09-01
This paper proposes an efficient Scalable High efficiency Video Coding (SHVC) to High Efficiency Video Coding (HEVC) transcoder, which can reduce the transcoding complexity significantly and provide a desired trade-off between the transcoding complexity and the transcoded video quality. To reduce the transcoding complexity, some of the coding information in the SHVC bitstream, such as coding unit (CU) depth, prediction mode, merge mode, motion vector information, intra direction information, and transform unit (TU) depth information, is mapped and transcoded to a single-layer HEVC bitstream. One major difficulty in transcoding arises when trying to reuse the motion information from the SHVC bitstream, since motion vectors referring to inter-layer reference (ILR) pictures cannot be reused directly in transcoding. Reusing motion information obtained from ILR pictures for those prediction units (PUs) will reduce the complexity of the SHVC transcoder greatly, but a significant reduction in the quality of the picture is observed. Pictures corresponding to the intra refresh pictures in the base layer (BL) will be coded as P pictures in the enhancement layer (EL) in the SHVC bitstream, and directly reusing the intra information from the BL for transcoding will not achieve good coding efficiency. To solve these problems, various transcoding technologies are proposed. The proposed technologies offer different trade-offs between transcoding speed and transcoding quality. They are implemented on the basis of reference software SHM-6.0 and HM-14.0 for the two-layer spatial scalability configuration. Simulations show that the proposed SHVC software transcoder reduces the transcoding complexity by up to 98-99% using the low-complexity transcoding mode when compared with the cascaded re-encoding method. The transcoder performance at various bitrates with different transcoding modes is compared in terms of transcoding speed and transcoded video quality.
NASA Astrophysics Data System (ADS)
Loran, Farhang; Mostafazadeh, Ali
2017-12-01
We provide an exact solution of the scattering problem for potentials of the form v(x,y) = χ_a(x)[v₀(x) + v₁(x)e^{iαy}], where χ_a(x) := 1 for x ∈ [0,a], χ_a(x) := 0 for x ∉ [0,a], the v_j(x) are real- or complex-valued functions, χ_a(x)v₀(x) is an exactly solvable scattering potential in one dimension, and α is a positive real parameter. If α exceeds the wave number k of the incident wave, the scattered wave does not depend on the choice of v₁(x). In particular, v(x,y) is invisible if v₀(x) = 0 and k < α. For k > α and v₁(x) ≠ 0, the scattered wave consists of a finite number of coherent plane-wave pairs ψ_n^± with wave vector k_n = (±√(k² − (nα)²), nα), where n = 0, 1, 2, …
The Development of the Differential MEMS Vector Hydrophone
Zhang, Guojun; Liu, Mengran; Shen, Nixin; Wang, Xubo; Zhang, Wendong
2017-01-01
To solve the problem that MEMS vector hydrophones are greatly interfered with by the vibration of the platform and flow noise in applications, this paper describes a differential MEMS vector hydrophone that could simultaneously receive acoustic signals and reject acceleration signals. Theoretical and simulation analyses have been carried out. Lastly, a prototype of the differential MEMS vector hydrophone has been created and tested using a standing wave tube and a vibration platform. The results of the test show that this hydrophone has a high sensitivity, Mv = −185 dB (@ 500 Hz, 0 dB reference 1 V/μPa), which is almost the same as the previous MEMS vector hydrophones, and has a low acceleration sensitivity, Mv = −58 dB (0 dB reference 1 V/g), which has decreased by 17 dB compared with the previous MEMS vector hydrophone. The differential MEMS vector hydrophone basically meets the requirements of acoustic vector detection when it is rigidly fixed to a working platform, which lays the foundation for engineering applications of MEMS vector hydrophones. PMID:28594384
From the Golden Rectangle and Fibonacci to Pedagogy and Problem Posing
ERIC Educational Resources Information Center
Brown, Stephen I.
1976-01-01
Beginning with an analysis of the golden rectangle, the author shows how a series of problems for student investigation arise from queries concerning changes in conditions and analogous situations. (SD)
Problems in Recording the Electrocardiogram.
ERIC Educational Resources Information Center
Webster, John G.
The unwanted signals that arise in electrocardiography are discussed. A technical background of electrocardiography is given, along with teaching techniques that educate students of medical instrumentation to solve the problems caused by these signals. (MJH)
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
The influence of delivery vectors on HIV vaccine efficacy
Ondondo, Beatrice O.
2014-01-01
Development of an effective HIV/AIDS vaccine remains a big challenge, largely due to the enormous HIV diversity which propels immune escape. Thus novel vaccine strategies are targeting multiple variants of conserved antibody and T cell epitopic regions which would incur a huge fitness cost to the virus in the event of mutational escape. Besides immunogen design, the delivery modality is critical for vaccine potency and efficacy, and should be carefully selected in order to not only maximize transgene expression, but to also enhance the immuno-stimulatory potential to activate innate and adaptive immune systems. To date, five HIV vaccine candidates have been evaluated for efficacy; protection from acquisition was achieved only in a small proportion of vaccinees in the RV144 study, which used a canarypox vector for delivery. Conversely, in the STEP study (HVTN 502), where human adenovirus serotype 5 (Ad5) was used, strong immune responses were induced, but vaccination was associated more with increased risk of HIV acquisition than with protection in vaccinees with pre-existing Ad5 immunity. The possibility that pre-existing immunity to a highly promising delivery vector may alter the natural course of HIV to increase acquisition risk is quite worrisome and a huge setback for HIV vaccine development. Thus, HIV vaccine development efforts are now geared toward delivery platforms which attain superior immunogenicity while concurrently limiting potential catastrophic effects likely to arise from pre-existing immunity or vector-related immuno-modulation. However, it still remains unclear whether it is poor immunogenicity of HIV antigens or substandard immunological potency of the safer delivery vectors that has limited the success of HIV vaccines. This article discusses some of the promising delivery vectors to be harnessed for improved HIV vaccine efficacy. PMID:25202303
An analysis of random projection for changeable and privacy-preserving biometric verification.
Wang, Yongjin; Plataniotis, Konstantinos N
2010-10-01
Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
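A minimal sketch of the transform described above (my construction; the dimensions and the key mechanism are illustrative):

import numpy as np

def random_project(x, m, key):
    rng = np.random.default_rng(key)       # the key makes the template changeable
    R = rng.standard_normal((m, x.size)) / np.sqrt(m)   # i.i.d. Gaussian entries
    return R @ x

x = np.random.default_rng(0).standard_normal(1024)     # e.g. a face-image vector
t1 = random_project(x, 256, key=42)
t2 = random_project(x, 256, key=43)        # re-issue a template with a new key
print(np.linalg.norm(t1) / np.linalg.norm(x))   # close to 1: norms preserved

The 1/√m scaling gives approximate norm (and hence similarity) preservation, while changing the key yields a fresh, uncorrelated template from the same biometric data.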
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Pande, S.
2009-12-01
Pattern analysis deals with the automatic detection of patterns in data, and a variety of algorithms are available for the purpose. These algorithms are commonly called Artificial Intelligence (AI) or data-driven algorithms; they have lately been applied to a variety of problems in hydrology and are becoming extremely popular. When confronting such a range of algorithms, the question of which one is the "best" arises. Some algorithms may be preferred because of lower computational complexity; others take into account prior knowledge of the form and the amount of the data; others are chosen based on a version of the Occam's razor principle that a simpler classifier performs better. Popper has argued, however, that Occam's razor is without operational value because there is no clear measure or criterion for simplicity. Examples of measures that can be used for this purpose are the so-called algorithmic complexity, also known as Kolmogorov complexity or Kolmogorov (algorithmic) entropy; the Bayesian information criterion; and the Vapnik-Chervonenkis dimension. On the other hand, the No Free Lunch Theorem states that there is no best general algorithm, and that specific algorithms are superior only for specific problems. It should be noted also that the appropriate algorithm and the appropriate complexity are constrained by the finiteness of the available data and the uncertainties associated with it. Thus, there is a compromise between the complexity of the algorithm, the data properties, and the robustness of the predictions. We discuss the above topics; briefly review the historical development of applications, with particular emphasis on statistical learning theory (SLT), also known as machine learning (ML), of which support vector machines and relevance vector machines are the most commonly known algorithms. We present some applications of such algorithms for distributed hydrologic modeling, and introduce an example of how the complexity measure can be applied for appropriate model choice within the context of applications in hydrologic modeling intended for use in studies about water resources, water resources management, and their direct relation to extreme conditions or natural hazards.
Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico
2014-08-01
Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first one is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second one is 1 FP/image, which allows the quality of the mass candidate detector's output to be estimated for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature discrepancies and prove advantageous not only at the candidate detection level, but also at subsequent steps of a CAD system.
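One plausible reading of the feature maxima propagation step (a sketch, not the authors' implementation): spread each local feature response over its neighborhood with a maximum filter before stacking features into per-pixel vectors, so that slightly misaligned peaks still co-occur.

import numpy as np
from scipy.ndimage import maximum_filter

def propagate_maxima(feature_maps, radius=3):
    # feature_maps: (n_features, H, W) array of per-pixel responses
    size = 2 * radius + 1
    return np.stack([maximum_filter(f, size=size) for f in feature_maps])

fm = np.random.default_rng(0).random((5, 64, 64))
aligned = propagate_maxima(fm, radius=3)   # peaks now overlap across maps
vectors = aligned.reshape(5, -1).T         # one feature vector per image location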
Selection of optimal complexity for ENSO-EMR model by minimum description length principle
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.
2012-12-01
One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of the evolution operator model. Since we usually deal with strongly high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding the optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some phase-variable vector can be considered a required component of the model. On the other hand, such an optimization of a phase space makes sense only in relation to the parametrization of the model we use, i.e., the representation of the evolution operator, so we should find an optimal structure of the model together with the phase-variable vector. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P. 046207, 2009. S. Kravtsov, D. Kondrashov, M. Ghil, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate, 18: 4425-4444.
Very light dilaton and naturally light Higgs boson
NASA Astrophysics Data System (ADS)
Hong, Deog Ki
2018-02-01
We study a very light dilaton, arising from a scale-invariant ultraviolet theory of the Higgs sector in the standard model of particle physics. Imposing the scale symmetry below the ultraviolet scale of the Higgs sector, we alleviate the fine-tuning problem associated with the Higgs mass. When the electroweak symmetry is spontaneously broken radiatively à la Coleman-Weinberg, the dilaton develops a vacuum expectation value away from the origin to give an extra contribution to the Higgs potential so that the Higgs mass becomes naturally around the electroweak scale. The ultraviolet scale of the Higgs sector can therefore be much higher than the electroweak scale, as the dilaton drives the Higgs mass to the electroweak scale. We also show that the light dilaton in this scenario is a good candidate for dark matter of mass m_D ∼ 1 eV–10 keV, if the ultraviolet scale is about 10-100 TeV. Finally we propose a dilaton-assisted composite Higgs model to realize our scenario. In addition to the light dilaton, the model predicts a heavy U(1) axial vector boson and two massive, oppositely charged, pseudo Nambu-Goldstone bosons, which might be accessible at the LHC.
Learning With Mixed Hard/Soft Pointwise Constraints.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-09-01
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
Dependence of surface tension on curvature obtained from a diffuse-interface approach
NASA Astrophysics Data System (ADS)
Badillo, Arnoldo; Lafferty, Nathan; Matar, Omar K.
2017-11-01
From a sharp-interface viewpoint, the surface tension force is f = σκδ(x − x_i)n, where σ is the surface tension, κ the local interface curvature, δ the delta function, and n the unit normal vector. The numerical implementation of this force on discrete domains poses challenges that arise from the calculation of the curvature. The continuous surface tension force model proposed by Brackbill et al. (1992) is an alternative, used commonly in two-phase computational models. In this model, δ is replaced by the gradient of a phase indicator field, whose integral across a diffuse interface equals unity. An alternative to the Brackbill model is the phase-field approach, which does not require an explicit calculation of the curvature. However, just as in Brackbill's approach, there are numerical errors that depend on the thickness of the diffuse interface, the grid spacing, and the curvature. We use differential geometry to calculate the leading errors in this force when obtained from a diffuse-interface approach, and outline possible routes to eliminate them. Our results also provide a simple geometrical explanation of the dependence of surface tension on curvature, and of the problem of line tension.
Uniform function constants of motion and their first-order perturbation
NASA Astrophysics Data System (ADS)
Prato, Domingo; Hamity, Victor H.
2005-05-01
The main purpose of this work is to present some uniform function constants of motion, rather than the well-known quantities arising from spacetime symmetries. These constants are usually associated with the intrinsic characteristics of the trajectories of a particle in a central potential field. We treat two cases. The first is the Lenz vector, which is sometimes found in the literature [1, 2]; the other is associated with the isotropic harmonic oscillator, of relative importance in some simple models of classical molecular interaction. The first example is applied to describe the perturbation of the trajectories in Rutherford scattering and the precession of the Keplerian orbit of a planet. In the other case the conserved quantity is a symmetric tensor. We find the eigenvectors and eigenvalues of that tensor while at the same time obtaining the solution to the problem of calculating the rotation rate of the orbits to first order in a perturbation parameter in the potential energy, by performing a simple coordinate transformation in the Cartesian plane. We think that the present work addresses many aspects of mechanics that are of didactic interest for other physics or mathematics courses.
Baggaley, R.; van Praag, E.
2000-01-01
This paper examines the ethical, economic and social issues that should be considered when antiretroviral interventions are being planned to reduce mother-to-child transmission of the human immunodeficiency virus. Interventions aiming to reduce mother-to-child transmission should be concerned with the rights of both the child and the mother. Women should not be seen as vectors of transmission but as people entitled to adequate health care and social services in their own right. For women accepting mother-to-child transmission interventions it is important to consider their medical and emotional needs and to ensure that they are not stigmatized or subjected to abuse or abandonment following voluntary counselling and testing. Seropositive women who do not wish to continue with pregnancy should have access to facilities for safe termination if this is legal in the country concerned. Problems arise in relation to the basic requirements for introducing such interventions via the health services in developing countries. A framework is given for making decisions about implementation of interventions in health care systems with limited resources where there is a relatively high prevalence of human immunodeficiency virus infection among pregnant women. PMID:10994287
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction, and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focused at source locations, estimated based on spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations, both in anechoic and reverberant environments, with random measurement noise.
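For context, the minimum variance distortionless response weighting is the textbook formula w = R⁻¹a / (aᴴR⁻¹a), with R the measurement covariance and a the near-field (spherical-wave) steering vector for the focus point. A compact sketch (the array geometry and covariance below are placeholders):

import numpy as np

def mvdr_weights(R, a):
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def nearfield_steering(mic_pos, focus, k):
    # spherical-wave manifold vector: 1/r amplitude decay and kr phase
    r = np.linalg.norm(mic_pos - focus, axis=1)
    return np.exp(-1j * k * r) / r

mics = np.stack([np.linspace(-0.5, 0.5, 16),
                 np.zeros(16), np.zeros(16)], axis=1)      # 16-element line array
a = nearfield_steering(mics, np.array([0.1, 0.3, 0.0]),
                       k=2 * np.pi * 1000 / 343)           # 1 kHz in air
R = np.eye(16) + 0.0j                                      # placeholder covariance
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))   # distortionless constraint: unity response at focus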
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
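A toy version of such an efficiency model (my construction, far simpler than the paper's): give each particle a geometric number of events, process up to W bank particles per event-iteration, and measure the fraction of vector lanes doing useful work.

import numpy as np

def vector_efficiency(N, W, mean_events=10.0, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    effs = []
    for _ in range(trials):
        remaining = rng.geometric(1.0 / mean_events, size=N)  # events per particle
        lanes_used = total_lanes = 0
        while remaining.sum() > 0:
            active = np.flatnonzero(remaining > 0)[:W]   # fill up to W lanes
            remaining[active] -= 1
            lanes_used += active.size
            total_lanes += W
        effs.append(lanes_used / total_lanes)
    return float(np.mean(effs))

for N in (160, 320, 640, 1280):
    print(N, round(vector_efficiency(N, W=64), 3))   # efficiency rises with bank size

Even this caricature reproduces the qualitative trend reported above: vector efficiency is governed by how large the particle bank is relative to the vector width.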
Community detection using Kernel Spectral Clustering with memory
NASA Astrophysics Data System (ADS)
Langone, Rocco; Suykens, Johan A. K.
2013-02-01
This work is related to the problem of community detection in dynamic scenarios, which arises, for instance, in the segmentation of moving objects, clustering of telephone traffic data, time-series micro-array data, etc. A desirable feature of a clustering model which has to capture the evolution of communities over time is temporal smoothness between clusters in successive time-steps. In this way the model is able to track the long-term trend while at the same time smoothing out short-term variation due to noise. We use Kernel Spectral Clustering with Memory effect (MKSC), which allows prediction of cluster memberships of new nodes via out-of-sample extension and has a proper model selection scheme. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness as a valid prior knowledge. The latter, in fact, allows the model to cluster the current data well while remaining consistent with the recent history. Here we propose a generalization of the MKSC model with an arbitrary memory, not only one time-step in the past. The experiments conducted on toy problems confirm our expectations: the more memory we add to the model, the smoother the clustering results are over time. We also compare with the Evolutionary Spectral Clustering (ESC) algorithm, which is a state-of-the-art method, and we obtain comparable or better results.
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises whether the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geom. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path-independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2017-04-01
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10⁴–10⁷) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
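A simplified illustration of the preconditioning idea (not the paper's full algorithm; dimensions, covariance model, and observation map are invented): for the linearized system (Q⁻¹ + HᵀR⁻¹H)s = b, left-multiplying by the prior covariance Q gives (I + QHᵀR⁻¹H)s = Qb, so the expensive application of Q⁻¹ is avoided entirely. The transformed operator is no longer symmetric in the Euclidean inner product, so GMRES is used here in place of CG.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m = 400, 30
x = np.linspace(0.0, 1.0, n)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # exponential prior covariance
H = rng.standard_normal((m, n)) / np.sqrt(n)         # linearized observation map
Rinv = np.eye(m) / 0.01                              # measurement precision
b = H.T @ Rinv @ rng.standard_normal(m)

A = LinearOperator((n, n), matvec=lambda v: v + Q @ (H.T @ (Rinv @ (H @ v))))
s, info = gmres(A, Q @ b, atol=1e-8)
print(info)   # 0 indicates convergence; Q inverse was never formed or applied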
Implicit, nonswitching, vector-oriented algorithm for steady transonic flow
NASA Technical Reports Server (NTRS)
Lottati, I.
1983-01-01
A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes such calculations economically feasible. However, for full utilization of the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations, and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
Intrinsic operators for the electromagnetic nuclear current
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Adam, Jr.; H. Arenhovel
1996-09-01
The intrinsic electromagnetic nuclear meson exchange charge and current operators arising from a separation of the center-of-mass motion are derived for a one-boson-exchange model for the nuclear interaction with scalar, pseudoscalar and vector meson exchange including leading order relativistic terms. Explicit expressions for the meson exchange operators corresponding to the different meson types are given in detail for a two-nucleon system. These intrinsic operators are to be evaluated between intrinsic wave functions in their center-of-mass frame.
Anti-gravity and galaxy rotation curves
NASA Astrophysics Data System (ADS)
Sanders, R. H.
1984-07-01
A modification of Newtonian gravitational attraction which arises in the context of modern attempts to unify gravity with the other forces in nature can produce rotation curves for spiral galaxies which are nearly flat from 10 to 100 kpc, bind clusters of galaxies, and close the universe with the density of baryonic matter consistent with primordial nucleosynthesis. This is possible if a very low mass vector boson carries an effective anti-gravity force which on scales smaller than that of galaxies almost balances the normal attractive gravity force.
Architecture of Columnar Nacre, and Implications for Its Formation Mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzler, Rebecca A.; Olabisi, Ronke M.; Coppersmith, Susan N.
2007-06-29
We analyze the structure of Haliotis rufescens nacre, or mother-of-pearl, using synchrotron spectromicroscopy and x-ray absorption near-edge structure spectroscopy. We observe imaging contrast between adjacent individual nacre tablets, arising because different tablets have different crystal orientations with respect to the radiation's polarization vector. Comparing previous data and our new data with models for columnar nacre growth, we find the data are most consistent with a model in which nacre tablets are nucleated by randomly distributed sites in the organic matrix layers.
NASA Astrophysics Data System (ADS)
Bulakhov, M. G.; Buyanov, Yu. I.; Yakubov, V. P.
1996-10-01
Based on an analysis of the polarization structure of the interference pattern that arises when electromagnetic waves are reflected from a boundary between media, it is shown that a full vector measurement of the total field allows one to uniquely distinguish the incident and reflected waves at each observation point without the use of spatial differencing. We have investigated the stability of these procedures with respect to measurement noise by means of numerical modeling.
Infrared Observations with the 1.6 Meter New Solar Telescope in Big Bear: Origins of Space Weather
2015-05-21
…with the NST came in the Summer of 2009, while the first observations corrected by adaptive optics (AO) came in the Summer of 2010, and the first vector magnetograms (VMGs) in the Summer of 2011. In 2012, a new generation of solar adaptive optics developed in Big Bear led to hitherto only… upon which the NST has yielded key information. Our concentration on sunspots in the second year of funding arises because of the improved resolution…
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers.
Catalogue identifier: AENQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License v3.0
No. of lines in distributed program, including test data, etc.: 102793
No. of bytes in distributed program, including test data, etc.: 369378
Distribution format: tar.gz
Programming language: Matlab 2010a
Computer: PC, Macintosh
Operating system: Windows, OSX, Linux
RAM: 8 GB (8,589,934,592 bytes)
Classification: 4.3
Nature of problem: To solve the Poisson problem in a standard domain with "patchy surface"-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions.
Solution method: Finite difference with mesh refinement.
Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D.
Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement.
Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
Use of Picard and Newton iteration for solving nonlinear ground water flow equations
Mehl, S.
2006-01-01
This study examines the use of Picard and Newton iteration to solve the nonlinear, saturated ground water flow equation. Here, a simple three-node problem is used to demonstrate the convergence difficulties that can arise when solving the nonlinear, saturated ground water flow equation in both homogeneous and heterogeneous systems with and without nonlinear boundary conditions. For these cases, the characteristic types of convergence patterns are examined. Viewing these convergence patterns as orbits of an attractor in a dynamical system provides further insight. It is shown that the nonlinearity that arises from nonlinear head-dependent boundary conditions can cause more convergence difficulties than the nonlinearity that arises from flow in an unconfined aquifer. Furthermore, the effects of damping on both convergence and convergence rate are investigated. It is shown that no single strategy is effective for all problems, and that understanding the pitfalls and merits of several methods can be helpful in overcoming convergence difficulties. Results show that Picard iterations can be a simple and effective method for the solution of nonlinear, saturated ground water flow problems.
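A bare-bones Picard sweep for a 1-D unconfined-aquifer problem d/dx(K h dh/dx) = 0 with fixed heads at both ends (my toy discretization, not the study's test problems): the coefficient K·h is lagged at the previous iterate, so each sweep is a linear solve.

import numpy as np

def picard_unconfined(h0=10.0, hL=5.0, n=51, K=1.0, tol=1e-10, damping=1.0):
    h = np.linspace(h0, hL, n)                    # initial guess
    for it in range(200):
        T = K * 0.5 * (h[:-1] + h[1:])            # lagged interface transmissivity
        A, rhs = np.zeros((n, n)), np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        rhs[0], rhs[-1] = h0, hL                  # Dirichlet boundary heads
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = T[i - 1], -(T[i - 1] + T[i]), T[i]
        h_new = np.linalg.solve(A, rhs)
        if np.max(np.abs(h_new - h)) < tol:
            return h_new, it
        h = (1.0 - damping) * h + damping * h_new  # damped Picard update
    return h, it

h, iters = picard_unconfined()
print(iters)   # converges in a handful of sweeps; the exact solution has h^2 linear in x

Setting damping below one trades per-iteration progress for robustness, which is exactly the trade-off examined in the study.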
Institutional Resource Requirements, Management, and Accountability.
ERIC Educational Resources Information Center
Matlock, John; Humphries, Frederick S.
A detailed resource management study was conducted at Tennessee State University, and resource management problems at other higher education institutions were identified through the exchange of data and studies. Resource requirements and management problems unique to black institutions were examined, as were the problems that arise from regional…
Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, C. K. W. (Editor); Hardin, J. C. (Editor)
1997-01-01
The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.
NASA Technical Reports Server (NTRS)
Raymond, C.; Hajj, G.
1994-01-01
We review the problem of separating, in spacecraft measurements of the Earth's magnetic field, the components arising from sources in the Earth's core and lithosphere from the contributions arising external to the Earth, namely ionospheric and magnetospheric fields.
NASA Technical Reports Server (NTRS)
Winget, J. M.; Hughes, T. J. R.
1985-01-01
The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of the two types of nonlinearities considered is related to a material temperature dependence, which is frequently needed to accurately model behavior over the temperature range of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
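As a sketch of the two ingredients EMOSA combines, a weighted-aggregation subproblem and a simulated-annealing acceptance rule, consider the toy bi-objective problem below (the objectives, neighborhood move, weight vector, and cooling schedule are illustrative placeholders, not the paper's benchmarks or settings):

```python
# Sketch: weighted-sum scalarization of one subproblem plus a Metropolis
# acceptance rule -- the two building blocks an EMOSA-style method adapts.
import math, random

def aggregate(fvals, w):
    """Weighted-sum scalarization of a multi-objective point."""
    return sum(wi * fi for wi, fi in zip(w, fvals))

def sa_accept(f_new, f_old, T):
    """Always accept improvement; sometimes accept a worse move."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / T)

f = lambda x: (x**2, (x - 2)**2)    # toy bi-objective problem on [0, 2]
w = (0.5, 0.5)                      # one subproblem's weight vector
x, T = random.uniform(0, 2), 1.0
for step in range(1000):
    x_new = min(2.0, max(0.0, x + random.gauss(0, 0.1)))   # neighborhood move
    if sa_accept(aggregate(f(x_new), w), aggregate(f(x), w), T):
        x = x_new
    T *= 0.995                      # geometric cooling schedule
print(x, f(x))
```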
Exact recovery of sparse multiple measurement vectors by [Formula: see text]-minimization.
Wang, Changlong; Peng, Jigen
2018-01-01
The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., those that have nonzero entries concentrated at a common location. Meanwhile, [Formula: see text]-minimization subject to matrices is widely used in a large number of algorithms designed for this problem, i.e., [Formula: see text]-minimization [Formula: see text]. The main contribution of this paper is two theoretical results about this technique. The first proves that for every system of multiple linear measurement equations there exists a constant [Formula: see text] such that the original unique sparse solution can also be recovered from a minimization in the [Formula: see text] quasi-norm subject to matrices whenever [Formula: see text]. The second gives an analytic expression for such a [Formula: see text]. Finally, we present one example to confirm the validity of our conclusions, and we use numerical experiments to show that our results increase the efficiency of algorithms designed for [Formula: see text]-minimization.
Advancing integrated tick management to mitigate burden of tick-borne diseases
USDA-ARS?s Scientific Manuscript database
More than half of the world’s population is at risk of exposure to vector-borne pathogens. Annually, more than 1 billion people are infected and more than 1 million die from vector-borne diseases, including those caused by pathogens transmitted by ticks. The problem with tick-borne diseases (TBD) is...
A Lesson in Vectors "Plain" and Simple
ERIC Educational Resources Information Center
Bradshaw, David M.
2004-01-01
The United States Military Academy (USMA) has a four course core mathematics curriculum that is studied by all students. The third course is MA205, Calculus II; a multivariate calculus course filled with practical applications. During a Problem Solving Lab (PSL), students participated in a hands-on exercise with multiple vector operations,…
Accommodating electric propulsion on SMART-1
NASA Astrophysics Data System (ADS)
Kugelberg, Joakim; Bodin, Per; Persson, Staffan; Rathsman, Peter
2004-07-01
This paper focuses on the technical challenges that arise when electric propulsion is used on a small spacecraft such as SMART-1. The choice of electric propulsion influences not only the attitude control system and the power system, but also the thermal control and the spacecraft structure. A description is given of how the design of the attitude control system uses the ability to control the alignment of the thrust vector in order to reduce momentum build-up. The philosophy of power generation and distribution is outlined, and it is shown how the thermal interfaces to highly dissipating units have been solved. Areas unique to electric propulsion are the added value of a thrust vector orientation mechanism and the special consideration given to electromagnetic compatibility. SMART-1 is equipped with a thruster gimbal mechanism providing a 10° cone in which the thrust vector can be pointed. Concerning electromagnetic compatibility, a discussion is given on how to evaluate the available test results, keeping in mind that one of the main objectives of the SMART-1 mission is to assess the impact of electric propulsion on the scientific instruments and on other spacecraft systems. Finally, the assembly, integration and test of the spacecraft is described. Compared to traditional propulsion systems, electric propulsion puts different requirements on the integration sequence and limits the possibilities for verifying the correct function of the thruster, since it needs a high-quality vacuum in order to operate. The prime contractor for SMART-1 is the Swedish Space Corporation (SSC). The electric propulsion subsystem is procured directly by ESA from SNECMA, France, and is delivered to SSC as a customer-furnished item. The conclusion of this paper is that electric propulsion is possible on a small spacecraft, which opens up possibilities for a new range of missions for which a large velocity increment is needed. The paper also presents SMART-1 and shows how the problems related to the accommodation of electric propulsion have been solved during design and planning of the project.
The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization
NASA Technical Reports Server (NTRS)
1980-01-01
Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
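The closed-form, elementary-function character of the Clohessy-Wiltshire solution mentioned above can be made concrete with the standard planar state-transition matrix (a textbook form, not the paper's derivation; the mean motion and initial state are example values):

```python
# Sketch: closed-form state transition for the planar Clohessy-Wiltshire
# equations (x radial, y along-track), the linear system underlying the
# primer-vector analysis. n is the target orbit's mean motion.
import numpy as np

def cw_stm(n, t):
    """4x4 transition matrix for the state [x, y, vx, vy]."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, s/n,         2*(1 - c)/n],
        [6*(s - n*t),  1, 2*(c - 1)/n, (4*s - 3*n*t)/n],
        [3*n*s,        0, c,           2*s],
        [6*n*(c - 1),  0, -2*s,        4*c - 3],
    ])

n = 0.0011                               # rad/s, roughly a low Earth orbit
x0 = np.array([100.0, 0.0, 0.0, 0.0])    # 100 m radial offset, at rest
print(cw_stm(n, 600.0) @ x0)             # relative state 10 minutes later
```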
Optimal integer resolution for attitude determination using global positioning system signals
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn
1998-01-01
In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a-priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.
Person Authentication Using Learned Parameters of Lifting Wavelet Filters
NASA Astrophysics Data System (ADS)
Niijima, Koichi
2006-10-01
This paper proposes a method for identifying persons by the use of lifting wavelet parameters learned through kurtosis minimization. Our learning method uses desirable properties of the kurtosis and of the wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate its feature vector, whose components consist of the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature-vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database. Person authentication is executed using the smile and anger faces of the same persons in the database.
Polar decomposition for attitude determination from vector observations
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.
1993-01-01
This work treats the problem of weighted least-squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer but improved algorithm is also suggested. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
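As a sketch of the underlying operation, the closest proper rotation to the weighted attitude profile matrix can be extracted via the SVD, one standard way to compute the orthogonal factor of the polar decomposition (the weights and vectors below are illustrative, not the paper's data):

```python
# Sketch of the polar-decomposition idea: the attitude estimate is the
# orthogonal factor of the attitude profile matrix B = sum_i w_i * b_i r_i^T.
# The orthogonal factor is extracted via SVD, with a determinant correction
# to force a proper rotation.
import numpy as np

def closest_rotation(B):
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))            # +1 or -1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# two weighted vector observations (reference r_i, body-frame b_i)
rng = np.random.default_rng(0)
r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
true_R = closest_rotation(rng.standard_normal((3, 3)))   # a random rotation
b1, b2 = true_R @ r1, true_R @ r2                        # noiseless measurements
B = 0.7 * np.outer(b1, r1) + 0.3 * np.outer(b2, r2)
print(np.allclose(closest_rotation(B), true_R))          # True
```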
Scattering and bound states of spinless particles in a mixed vector-scalar smooth step potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, M.G.; Castro, A.S. de
2009-11-15
Scattering and bound states for a spinless particle in the background of a kink-like smooth step potential, with an added scalar uniform background, are considered with a general mixing of vector and scalar Lorentz structures. The problem is mapped into a Schroedinger-like equation with an effective Rosen-Morse potential. It is shown that the scalar uniform background produces subtle and tricky effects on the scattering states and reveals itself to be a decisive element in the formation of bound states. In that process, it is shown that the problem of solving a differential equation for the eigenenergies is transmuted into the simpler and more efficient problem of solving an irrational algebraic equation.
Implementation of a new fuzzy vector control of induction motor.
Rafa, Souad; Larabi, Abdelkader; Barazane, Linda; Manceur, Malik; Essounbouli, Najib; Hamzaoui, Abdelaziz
2014-05-01
The aim of this paper is to present a new approach to controlling an induction motor using type-1 fuzzy logic. The induction motor has a nonlinear, uncertain, and strongly coupled model. The vector control technique, which is based on the inverse model of the induction motor, solves the coupling problem. Unfortunately, in practice this assumption is not satisfied because of model uncertainties. Indeed, the presence of uncertainties led us to use human expertise, such as fuzzy logic techniques. In order to maintain the decoupling and to overcome the problem of sensitivity to parametric variations, the field-oriented control is replaced by a new control block. The simulation results show that both control schemes provide, in their basic configuration, comparable performance regarding the decoupling. However, the fuzzy vector control provides insensitivity to parametric variations compared to the classical one. The fuzzy vector control scheme is successfully implemented in real time using a dSPACE 1104 digital signal processor board. The efficiency of this technique is also verified experimentally under different dynamic operating conditions, such as sudden load changes, parameter variations, and speed changes. The fuzzy vector control is found to be well suited for induction motor applications. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Katz, Michael G; Fargnoli, Anthony S; Williams, Richard D; Bridges, Charles R
2013-11-01
Gene therapy is one of the most promising fields for developing new treatments for the advanced stages of ischemic and monogenetic, particularly autosomal or X-linked recessive, cardiomyopathies. The remarkable ongoing efforts in advancing various targets have largely been inspired by the results achieved in several notable gene therapy trials, such as those for hemophilia B and Leber's congenital amaurosis. Rate-limiting problems preventing successful clinical application in the cardiac disease area, however, are primarily attributable to inefficient gene transfer, host responses, and the lack of sustainable therapeutic transgene expression. It is arguable that these problems are directly correlated with the choice of vector, dose level, and associated cardiac delivery approach as a whole treatment system. Essentially, a delicate balance exists in maximizing the gene transfer required for efficacy while remaining within safety limits. Therefore, the development of safe, effective, and clinically applicable gene delivery techniques for selected nonviral and viral vectors will certainly be invaluable in obtaining future regulatory approvals. The choice of gene transfer vector, dose level, and delivery system are likely to be critical determinants of therapeutic efficacy. It is here that the interactions between vector uptake and trafficking, the delivery route, and the host's physical limits must be considered synergistically for a successful treatment course.
Humanlike agents with posture planning ability
NASA Astrophysics Data System (ADS)
Jung, Moon R.; Badler, Norman I.
1992-11-01
Human body models are geometric structures which may be ultimately controlled by kinematically manipulating their joints, but for animation, it is desirable to control them in terms of task-level goals. We address a fundamental problem in achieving task-level postural goals: controlling massively redundant degrees of freedom. We reduce the degrees of freedom by introducing significant control points and vectors, e.g., pelvis forward vector, palm up vector, and torso up vector. This reduced set of parameters is used to enumerate primitive motions and motion dependencies among them, and thus to select from a small set of alternative postures (e.g., bend versus squat to lower shoulder height). A plan for a given goal is found by incrementally constructing a goal/constraint set based on the given goal, motion dependencies, collision avoidance requirements, and discovered failures. Global postures satisfying a given goal/constraint set are determined with the help of incremental mental simulation which uses a robust inverse kinematics algorithm. The contributions of the present work are: (1) there is no need to specify beforehand the final goal configuration, which is unrealistic for the human body, and (2) the degrees-of-freedom problem becomes easier by representing body configurations in terms of 'lumped' control parameters, that is, control points and vectors.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting masks (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; these are designed only for the nominal imaging parameters, without giving sufficient attention to process variations due to aberrations, defocus, and dose variation. However, the effects of process variations existing in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
Positive Discipline A to Z: 1001 Solutions to Everyday Parenting Problems.
ERIC Educational Resources Information Center
Nelsen, Jane; And Others
This book is a parenting reference work that offers background on common disciplinary problems and parenting issues, advice on how to handle problems and issues as they arise, and insight into how to avoid disciplinary problems in the future. The book is divided into three sections: Basic Positive Discipline Parenting Tools, Positive Discipline…
ERIC Educational Resources Information Center
Fernandez-Parra, A.; Lopez-Rubio, S.; Mata, S.; Calero, M. D.; Vives, M. C.; Carles, R.; Navarro, E.
2013-01-01
Introduction: Conduct problems arising in infancy are one of the main reasons for which parents seek psychological assistance. Although these problems usually begin when the child has started school, in recent years a group of children has been identified who begin to manifest such problems from their earliest infancy and whose prognosis seems to…
A new approach to impulsive rendezvous near circular orbit
NASA Astrophysics Data System (ADS)
Carter, Thomas; Humi, Mayer
2012-04-01
A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function, and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory, the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and is also useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution for the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that requires only one impulse. For brevity, degenerate and singular solutions are not discussed in detail; they should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.
Zhao, Yu-Xiang; Chou, Chien-Hsing
2016-01-01
In this study, a new feature selection algorithm, the neighborhood-relationship feature selection (NRFS) algorithm, is proposed for identifying rat electroencephalogram signals and recognizing Chinese characters. In these two applications, dependent relationships exist among the feature vectors and their neighboring feature vectors, and the proposed NRFS algorithm was designed to exploit this property. In the NRFS algorithm, unselected feature vectors have a high priority of being added into the feature subset if their neighboring feature vectors have been selected. In addition, selected feature vectors have a high priority of being eliminated if their neighboring feature vectors are not selected. In the experiments conducted in this study, the NRFS algorithm was compared with two other feature selection algorithms. The experimental results indicate that the NRFS algorithm can extract the crucial frequency bands for identifying rat vigilance states and identify the crucial character regions for recognizing Chinese characters. PMID:27314346
Computational Investigation of Fluidic Counterflow Thrust Vectoring
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Deere, Karen A.
1999-01-01
A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.
A Novel Gradient Vector Flow Snake Model Based on Convex Function for Infrared Image Segmentation
Zhang, Rui; Zhu, Shiping; Zhou, Qin
2016-01-01
Infrared image segmentation is a challenging topic because infrared images are characterized by high noise, low contrast, and weak edges. Active contour models, especially gradient vector flow, have several advantages for infrared image segmentation. However, the GVF (gradient vector flow) model also has some drawbacks, including a dilemma between noise smoothing and weak-edge protection, which significantly degrades infrared image segmentation. In order to solve this problem, we propose a novel generalized gradient vector flow snake model combining the GGVF (generic gradient vector flow) and NBGVF (normally biased gradient vector flow) models. We also adopt a new type of coefficient setting in the form of a convex function to improve the ability to protect weak edges while smoothing noise. Experimental results and comparisons against other methods indicate that our proposed snake model outperforms other snake models for infrared image segmentation. PMID:27775660
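For context, the classic GVF iteration that these generalized models build on diffuses the edge-map gradient while a data term anchors the field at strong edges. A minimal numpy sketch follows (mu, the step size, and the toy edge map are illustrative, and this is the baseline GVF, not the proposed convex-coefficient model):

```python
# Sketch of the classic gradient vector flow (GVF) iteration: diffuse the
# edge-map gradient, with the data term pinning the field where edges are
# strong. Periodic boundaries via np.roll for brevity.
import numpy as np

def gvf(edge_map, mu=0.2, iters=200, dt=0.25):
    fy, fx = np.gradient(edge_map)
    mag2 = fx**2 + fy**2                    # edge strength (data-term weight)
    u, v = fx.copy(), fy.copy()
    lap = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                     np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
    for _ in range(iters):
        u += dt * (mu * lap(u) - mag2 * (u - fx))
        v += dt * (mu * lap(v) - mag2 * (v - fy))
    return u, v

# toy edge map: a bright square on a dark background
f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0
u, v = gvf(f)
```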
New discretization and solution techniques for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.; Liu, C. H.
1983-01-01
This paper considers several topics arising in the finite element solution of the incompressible Navier-Stokes equations. Specifically, the question of choosing finite element velocity/pressure spaces is addressed, particularly from the viewpoint of achieving stable discretizations leading to convergent pressure approximations. Following this, the role of artificial viscosity in viscous flow calculations is studied, emphasizing recent work by several researchers for the anisotropic case. The last section treats the problem of solving the nonlinear systems of equations which arise from the discretization. Time marching methods and classical iterative techniques, as well as some recent modifications are mentioned.
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
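The simplest of the three reductions, grid coarsening, can be written as an aggregation matrix of block means applied to the native-resolution state vector; a minimal sketch (the dimensions and block size are illustrative, not the paper's satellite-scale setup):

```python
# Sketch of grid coarsening as an aggregation matrix G mapping a
# native-resolution state vector to block means. Aggregation error comes
# from the fixed intra-block pattern this imposes on the solution.
import numpy as np

def coarsen_operator(n_native, block):
    """G averages each consecutive `block` native elements into one."""
    n_coarse = n_native // block
    G = np.zeros((n_coarse, n_native))
    for i in range(n_coarse):
        G[i, i*block:(i+1)*block] = 1.0 / block
    return G

x_native = np.sin(np.linspace(0, 3, 12))   # a smooth 12-element state vector
G = coarsen_operator(12, 3)                # reduce 12 -> 4 elements
print(G @ x_native)                        # aggregated state vector
```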
Responding to Adolescent Suicide.
ERIC Educational Resources Information Center
Phi Delta Kappa Educational Foundation, Bloomington, IN.
This publication is designed to help educators deal with the problems that arise after an adolescent's suicide. It recommends that teachers should be able to detect differences in students' responses to emotional problems. Following a preface and a brief review of the extent of the problem, the first chapter discusses which adolescents are…
Introducing the Hero Complex and the Mythic Iconic Pathway of Problem Gambling
ERIC Educational Resources Information Center
Nixon, Gary; Solowoniuk, Jason
2009-01-01
Early research into the motivations behind problem gambling reflected separate paradigms of thought splitting our understanding of the gambler into divergent categories. However, over the past 25 years, problem gambling is now best understood to arise from biological, environmental, social, and psychological processes, and is now encapsulated…
Esperanto and International Language Problems: A Research Bibliography.
ERIC Educational Resources Information Center
Tonkin, Humphrey R.
This bibliography is intended both for the researcher and for the occasional student of international language problems, particularly as these relate to the international language Esperanto. The book is divided into two main sections: Part One deals with problems arising from communication across national boundaries and the search for a solution…
Attitude Determination Using Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1998-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. TRIAD, the earliest published algorithm for determining spacecraft attitude from two vector measurements, has been widely used in both ground-based and onboard attitude determination. Later attitude determination methods have been based on Wahba's optimality criterion for n arbitrarily weighted observations. The solution of Wahba's problem is somewhat difficult in the general case, but there is a simple closed-form solution in the two-observation case. This solution reduces to the TRIAD solution for certain choices of measurement weights. This paper presents and compares these algorithms as well as sub-optimal algorithms proposed by Bar-Itzhack, Harman, and Reynolds. Some new results will be presented, but the paper is primarily a review and tutorial.
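As an illustration, here is a minimal sketch of the TRIAD construction from two vector measurements: build an orthonormal triad from each vector pair and compose the two frames (noiseless check with an arbitrary rotation; the optimal weighting of the Wahba-based methods is not represented):

```python
# Sketch of the TRIAD algorithm for the two-observation case.
# r1, r2: reference-frame unit vectors; b1, b2: body-frame measurements.
import numpy as np

def triad(r1, r2, b1, b2):
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2); t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    R_ref, R_body = frame(r1, r2), frame(b1, b2)
    return R_body @ R_ref.T          # attitude matrix: maps reference to body

# noiseless check with an arbitrary rotation about z
c, s = np.cos(0.3), np.sin(0.3)
A = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
r1, r2 = np.array([1.0, 0, 0]), np.array([0, 0.6, 0.8])
print(np.allclose(triad(r1, r2, A @ r1, A @ r2), A))   # True
```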
Multiclass Reduced-Set Support Vector Machines
NASA Technical Reports Server (NTRS)
Tang, Benyang; Mazzoni, Dominic
2006-01-01
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
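The pre-image step can be sketched as a small global optimization: find the single input whose kernel-space image best matches a weighted combination of support vectors. Below is a toy RBF-kernel version using SciPy's differential evolution (the data, weights, gamma, and bounds are illustrative, and the paper's recomputation of the SVM weights and bias is omitted):

```python
# Sketch: compute a reduced-set pre-image with differential evolution.
# For an RBF kernel, ||phi(z)||^2 = 1, so maximizing the projection of
# psi = sum_i alpha_i phi(x_i) onto phi(z) reduces to maximizing sum_i
# alpha_i k(x_i, z).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2))        # "support vectors"
alpha = rng.standard_normal(5)         # their weights
gamma = 0.5

def neg_projection(z):
    k = np.exp(-gamma * np.sum((X - z)**2, axis=1))
    return -alpha @ k                  # negated for minimization

result = differential_evolution(neg_projection,
                                bounds=[(-3, 3), (-3, 3)], seed=0)
print(result.x)                        # the reduced-set vector (pre-image)
```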
Lyapunov vector function method in the motion stabilisation problem for nonholonomic mobile robot
NASA Astrophysics Data System (ADS)
Andreev, Aleksandr; Peregudova, Olga
2017-07-01
In this paper we propose a sampled-data control law for the stabilisation of nonstationary motion of a nonholonomic mobile robot. We assume that the robot moves on a horizontal surface without slipping. The dynamical model of the mobile robot is considered. The robot has one front free wheel and two rear wheels which are controlled by two independent electric motors. We assume that the controls are piecewise constant signals. The controller design relies on the backstepping procedure with the use of the Lyapunov vector-function method. The theoretical considerations are verified by numerical simulation.
Vectorization of a Monte Carlo simulation scheme for nonequilibrium gas dynamics
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
Significant improvement has been obtained in the numerical performance of a Monte Carlo scheme for the analysis of nonequilibrium gas dynamics through an implementation of the algorithm which takes advantage of vector hardware, as presently demonstrated through application to three different problems. These are (1) a 1D standing-shock wave; (2) the flow of an expanding gas through an axisymmetric nozzle; and (3) the hypersonic flow of Ar gas over a 3D wedge. Problem (3) is illustrative of the greatly increased number of molecules which the simulation may involve, thanks to improved algorithm performance.
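The flavor of such vectorization can be seen in a free-flight (move) step written over whole numpy arrays rather than a per-molecule loop (a 1D toy with specular walls, not the paper's simulation):

```python
# Sketch: the particle move step of a direct-simulation Monte Carlo code,
# vectorized over all molecules at once. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1_000_000, 1e-6
pos = rng.uniform(0.0, 1.0, n)            # 1D positions in a unit domain
vel = rng.normal(0.0, 300.0, n)           # thermal velocities, m/s

# vectorized free flight with specular reflection at both walls
pos += vel * dt
out_low, out_high = pos < 0.0, pos > 1.0
pos[out_low] = -pos[out_low];        vel[out_low] *= -1.0
pos[out_high] = 2.0 - pos[out_high]; vel[out_high] *= -1.0
```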
Vector Potential Generation for Numerical Relativity Simulations
NASA Astrophysics Data System (ADS)
Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian
2017-01-01
Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
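The defining relation B = curl(A) can be checked numerically; the sketch below verifies a symmetric-gauge potential for a uniform Bz using centered differences on a collocated grid (for brevity, not the staggered layout the code targets; grid size and field strength are illustrative):

```python
# Sketch of the relation the code enforces: given a vector potential A,
# B = curl(A) should reproduce the target field. Here A is the
# symmetric-gauge potential of a uniform Bz.
import numpy as np

N, h = 32, 1.0 / 31
x = np.linspace(0, 1, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
B0 = 2.0
Ax, Ay, Az = -0.5 * B0 * Y, 0.5 * B0 * X, np.zeros_like(X)

def d(f, axis):
    return np.gradient(f, h, axis=axis)   # centered finite difference

Bx = d(Az, 1) - d(Ay, 2)
By = d(Ax, 2) - d(Az, 0)
Bz = d(Ay, 0) - d(Ax, 1)
print(np.allclose(Bz, B0), np.allclose(Bx, 0), np.allclose(By, 0))  # all True
```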
Harnessing Integrated Vector Management for Enhanced Disease Prevention.
Chanda, Emmanuel; Ameneshewa, Birkinesh; Bagayoko, Magaran; Govere, John M; Macdonald, Michael B
2017-01-01
The increasing global threat of emerging and re-emerging vector-borne diseases (VBDs) poses a serious health problem. The World Health Organization (WHO) recommends integrated vector management (IVM) strategy for combating VBD transmission. An IVM approach requires entomological knowledge, technical and infrastructure capacity, and systems facilitating stakeholder collaboration. In sub-Saharan Africa, successful operational IVM experience comes from relatively few countries. This article provides an update on the extent to which IVM is official national policy, the degree of IVM implementation, the level of compliance with WHO guidelines, and concordance in the understanding of IVM, and it assesses the operational impact of IVM. The future outlook encompasses rational and sustainable use of effective vector control tools and inherent improved return for investment for disease vector control. Copyright © 2016 Elsevier Ltd. All rights reserved.
Scale-chiral symmetry, ω meson, and dense baryonic matter
NASA Astrophysics Data System (ADS)
Ma, Yong-Liang; Rho, Mannque
2018-05-01
It is shown that explicitly broken scale symmetry is essential for dense skyrmion matter in hidden local symmetry theory. Consistency with the vector manifestation fixed point for the hidden local symmetry of the lowest-lying vector mesons and the dilaton limit fixed point for scale symmetry in dense matter is found to require that the anomalous dimension (|γG2| ) of the gluon field strength tensor squared (G2 ) that represents the quantum trace anomaly should be 1.0 ≲|γG2|≲3.5 . The magnitude of |γG2| estimated here will be useful for studying hadron and nuclear physics based on the scale-chiral effective theory. More significantly, that the dilaton limit fixed point can be arrived at with γG2≠0 at some high density signals that scale symmetry can arise in dense medium as an "emergent" symmetry.
Some Quantum Symmetries and Their Breaking II
NASA Astrophysics Data System (ADS)
Selesnick, S. A.
2013-04-01
We consider symmetry breaking in the context of vector bundle theory, which arises quite naturally not only when attempting to "gauge" symmetry groups, but also as a means of localizing those global symmetry breaking effects known as spontaneous. We review such spontaneous symmetry breaking first for a simplified version of the Goldstone scenario for the case of global symmetries, and then in a localized form which is applied to a derivation of some of the phenomena associated with superconduction in both its forms, type I and type II. We then extend these procedures to effect the Higgs mechanism of electroweak theory, and finally we describe an extension to the flavor symmetries of the lightest quarks, including a brief discussion of CP-violation in the neutral kaon system. A largely self-contained primer of vector bundle theory is provided in Sect. 4, which supplies most of the results required thereafter.
Fermi arc plasmons in Weyl semimetals
NASA Astrophysics Data System (ADS)
Song, Justin C. W.; Rudner, Mark S.
2017-11-01
In the recently discovered Weyl semimetals, the Fermi surface may feature disjoint, open segments—the so-called Fermi arcs—associated with topological states bound to exposed crystal surfaces. Here we show that the collective dynamics of electrons near such surfaces sharply departs from that of a conventional three-dimensional metal. In magnetic systems with broken time reversal symmetry, the resulting Fermi arc plasmons (FAPs) are chiral, with dispersion relations featuring open, hyperbolic constant frequency contours. As a result, a large range of surface plasmon wave vectors can be supported at a given frequency, with corresponding group velocity vectors directed along a few specific collimated directions. Fermi arc plasmons can be probed using near-field photonics techniques, which may be used to launch highly directional, focused surface plasmon beams. The unusual characteristics of FAPs arise from the interplay of bulk and surface Fermi arc carrier dynamics and give a window into the unusual fermiology of Weyl semimetals.
A look into the Medical and Veterinary Entomology crystal ball.
Dantas-Torres, F; Cameron, M M; Colwell, D D; Otranto, D
2014-08-01
Medical and Veterinary Entomology (MVE) represents a leading periodical in its field and covers many aspects of the biology and control of insects, ticks, mites and other arthropods of medical and veterinary importance. Since the first issue of the journal, researchers working in both developed and developing countries have published in MVE, with direct impact on current knowledge in the field. An increasing number of articles dealing with the epidemiology and transmission of vector-borne pathogens have been published in MVE, reflecting rapid changes in vector distribution, pathogen transmission and host-arthropod interactions. This article represents a gaze into the crystal ball in which we identify areas of increasing interest, discuss the main changes that have occurred in the epidemiology of parasitic arthropods since the first issue of MVE, and predict the principal scientific topics that might arise in the next 25 years for scientists working in medical and veterinary entomology. © 2014 The Royal Entomological Society.
Stokes' theorem, gauge symmetry and the time-dependent Aharonov-Bohm effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macdougall, James, E-mail: jbm34@mail.fresnostate.edu; Singleton, Douglas, E-mail: dougs@csufresno.edu
2014-04-15
Stokes' theorem is investigated in the context of the time-dependent Aharonov-Bohm effect—the two-slit quantum interference experiment with a time varying solenoid between the slits. The time varying solenoid produces an electric field which leads to an additional phase shift which is found to exactly cancel the time-dependent part of the usual magnetic Aharonov-Bohm phase shift. This electric field arises from a combination of a non-single valued scalar potential and/or a 3-vector potential. The gauge transformation which leads to the scalar and 3-vector potentials for the electric field is non-single valued. This feature is connected with the non-simply connected topology of the Aharonov-Bohm set-up. The non-single valued nature of the gauge transformation function has interesting consequences for the 4-dimensional Stokes' theorem for the time-dependent Aharonov-Bohm effect. An experimental test of these conclusions is proposed.
Vector and tensor contributions to the curvature perturbation at second order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrilho, Pedro; Malik, Karim A., E-mail: p.gregoriocarrilho@qmul.ac.uk, E-mail: k.malik@qmul.ac.uk
2016-02-01
We derive the evolution equation for the second order curvature perturbation using standard techniques of cosmological perturbation theory. We do this for different definitions of the gauge invariant curvature perturbation, arising from different splits of the spatial metric, and compare the expressions. The results are valid at all scales and include all contributions from scalar, vector and tensor perturbations, as well as anisotropic stress, with all our results written purely in terms of gauge invariant quantities. Taking the large-scale approximation, we find that a conserved quantity exists only if, in addition to the non-adiabatic pressure, the transverse traceless part of the anisotropic stress tensor is also negligible. We also find that the version of the gauge invariant curvature perturbation which is exactly conserved is the one defined with the determinant of the spatial part of the inverse metric.
Quantum and classical optics-emerging links
NASA Astrophysics Data System (ADS)
Eberly, J. H.; Qian, Xiao-Feng; Qasimi, Asma Al; Ali, Hazrat; Alonso, M. A.; Gutiérrez-Cuevas, R.; Little, Bethany J.; Howell, John C.; Malhotra, Tanya; Vamivakas, A. N.
2016-06-01
Quantum optics and classical optics are linked in ways that are becoming apparent as a result of numerous recent detailed examinations of the relationships that elementary notions of optics have with each other. These elementary notions include interference, polarization, coherence, complementarity and entanglement. All of them are present in both quantum and classical optics. They have historic origins, and at least partly for this reason not all of them have quantitative definitions that are universally accepted. This makes further investigation into their engagement in optics very desirable. We pay particular attention to effects that arise from the mere co-existence of separately identifiable and readily available vector spaces. Exploitation of these vector-space relationships is shown to have unfamiliar theoretical implications and new options for observation. It is our goal to bring emerging quantum-classical links into wider view and to indicate directions in which forthcoming and future work will promote discussion and lead to unified understanding.
NASA Technical Reports Server (NTRS)
Mcardle, Jack G.; Esker, Barbara S.
1993-01-01
Many conceptual designs for advanced short-takeoff, vertical landing (ASTOVL) aircraft need exhaust nozzles that can vector the jet to provide forces and moments for controlling the aircraft's movement or attitude in flight near the ground. A type of nozzle that can both vector the jet and vary the jet flow area is called a vane nozzle. Basically, the nozzle consists of parallel, spaced-apart flow passages formed by pairs of vanes (vanesets) that can be rotated on axes perpendicular to the flow. Two important features of this type of nozzle are the abilities to vector the jet rearward up to 45 degrees and to produce less harsh pressure and velocity footprints during vertical landing than an equivalent single jet. A one-third-scale model of a generic vane nozzle was tested with unheated air at the NASA Lewis Research Center's Powered Lift Facility. The model had three parallel flow passages, each formed by a vaneset consisting of a long and a short vane. The longer vanes controlled the jet vector angle, and the shorter controlled the flow area. Nozzle performance for three nominal flow areas (basic and plus or minus 21 percent of basic area), each at nominal jet vector angles from -20 deg (forward of vertical) to +45 deg (rearward of vertical), is presented. The tests were made with the nozzle mounted on a model tailpipe with a blind flange on the end to simulate a closed cruise nozzle, at tailpipe-to-ambient pressure ratios from 1.8 to 4.0. Also included are jet wake data, single-vaneset vector performance for long/short and equal-length vane designs, and pumping capability. The pumping capability arises from the subambient pressure developed in the cavities between the vanesets, which could be used to aspirate flow from a source such as the engine compartment. Some of the performance characteristics are compared with those of a single-jet nozzle previously reported.
NASA Astrophysics Data System (ADS)
Perez-Flores, P.; Veloso, E. E.; Cembrano, J. M.; Sánchez, P.; Iriarte, S.; Lohmar, S.
2013-12-01
Reorientation of mesoscopic faults, veins and fractures recovered from drilling is critical to construct reliable structural models that can account for their architecture and deformation regime. However, oriented cores are expensive and time consuming to drill. Some techniques achieve reorientation by introducing tools into the borehole, but problems arise when boreholes are unstable or collapse. One alternative technique is to obtain reliable paleomagnetic vectors to reorient each core piece after drilling. Here, we present stable and reliable remanent magnetic vectors calculated from the Tol-1 core to analyze the geometry of the fracture network and its relationship to regional tectonics. Tol-1 is a vertical, 1073 m deep geothermal well, drilled at the Tolhuaca Geothermal Field in the Southern Volcanic Zone of the Andes by MRP Geothermal Chile Ltda (formerly GGE Chile SpA) in 2009. The core consists of basaltic/andesitic volcanic rocks with subordinate pyroclastic/volcaniclastic units, of probable Pleistocene age. Fault planes with slickenlines and mineral fiber kinematic indicators are common in the upper 700 m of the core. Calcite, quartz and calcite-quartz veins are recognized along the entire core, whereas epidote-quartz and calcite-epidote veins occur in the last 350 m; minor chlorite, anhydrite and clay minerals are also present. Orientations of structural features in the core were measured with a goniometer using the core's axis and a false north for each piece; hence, the orientation data have a false strike but a real dip. To achieve total reorientation of the pieces, we collected 200 standard-size paleomagnetic specimens, ensuring that at least four of them were recovered from continuous pieces. Thermal (up to 700°C) and alternating field demagnetization (up to 90 mT in steps of 2 mT) methods were used to isolate a stable remanent magnetization (RM) vector, and the two techniques yielded similar results. RM vectors were recovered between 0 and 25 mT, and between 0 and 625°C. The declination of the RM vectors was used to bring the pieces to a common anchor orientation calculated through the Geocentric Axial Dipole (GAD) model. The paleomagnetic technique proved reliable for reorienting the Tol-1 core. Structural analyses along the core show a N50-60E-striking preferential vein orientation. In addition, N40-50E- and N60-70W-striking preferential fault orientations were identified. Kinematic analysis of fault-slip data shows a N60E-striking bulk fault plane solution with a normal strain regime. The vein and fault orientations show strain axes compatible with the published regional stress field (σmax N238E).
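The reorientation step itself reduces to rotating each piece about the vertical core axis by the difference between the GAD-expected and measured RM declinations, which corrects the false strikes while leaving the dips unchanged. A minimal sketch with invented example angles (not Tol-1 data):

```python
# Sketch of the reorientation step: rotate each core piece about the
# vertical (core) axis so its measured RM declination matches the
# declination expected from the GAD model. Angles below are examples.
import numpy as np

def reorient_strike(false_strike_deg, measured_dec_deg, expected_dec_deg):
    """Corrected strike of a structure measured against a false north."""
    rotation = expected_dec_deg - measured_dec_deg
    return (false_strike_deg + rotation) % 360.0

# a vein measured at false strike 120 deg on a piece whose RM declination
# is 35 deg, with a GAD-expected declination of 0 deg at the site
print(reorient_strike(120.0, 35.0, 0.0))   # corrected strike: 85.0 deg
```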
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover areas where no motion vector is reliable enough to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than those of other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
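One simple correlation-style reliability test in this spirit flags a motion vector that deviates strongly from its neighborhood consensus; a sketch (the 3x3 neighborhood, median consensus, and threshold are illustrative simplifications, not the paper's classification stage):

```python
# Sketch: flag a motion vector as unreliable when it deviates strongly
# from the median of its 8 neighbors.
import numpy as np

def unreliable_mask(mv, thresh=2.0):
    """mv: (H, W, 2) motion-vector field; returns boolean (H, W) mask."""
    H, W, _ = mv.shape
    pad = np.pad(mv, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # stack the 8 neighbors of every vector
    neigh = np.stack([pad[i:i+H, j:j+W] for i in range(3) for j in range(3)
                      if (i, j) != (1, 1)], axis=0)
    med = np.median(neigh, axis=0)                 # neighborhood consensus
    return np.linalg.norm(mv - med, axis=2) > thresh

mv = np.zeros((8, 8, 2)); mv[4, 4] = (9.0, -7.0)   # one outlier vector
print(unreliable_mask(mv)[4, 4])                   # True
```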
Errors, Error, and Text in Multidialect Setting.
ERIC Educational Resources Information Center
Candler, W. J.
1979-01-01
This article discusses the various dialects of English spoken in Liberia and analyzes the problems of Liberian students in writing compositions in English. Errors arise mainly from differences in culture and cognition, not from superficial linguistic problems. (CFM)
Slooff, R
1987-12-01
The changing picture of malaria worldwide needs to be viewed in the context of other developments before we can determine the directions to take to provide the thrusts required in malaria vector control. As a result of population growth, increasing urbanization and continuing pressure on scarce natural resources, the epidemiology of malaria and its manifestation as a public health problem are undergoing profound modifications in several parts of the world. This picture is further complicated by the spread of resistance to pesticides in the vector and to drugs in Plasmodium falciparum. In the immediate future, these trends will continue. In addition, the appearance of suitable vaccines is a highly probable event to be taken into consideration. The WHO Global Strategy of Health For All by the Year 2000 aims at the improvement of levels of health through primary health care. Among other things, this implies a greater reliance on community involvement and on intersectoral collaboration for health. In this light, the major malaria problems in the year 2000 will be: (1) "hard core" endemic areas with inadequate infrastructure and poor socio-economic development; (2) resource development areas, in particular those under illegal or poorly controlled exploitation; (3) expanding urban areas; and (4) increased mobility of non-immunes, particularly if uncontrolled. In order to cope with these problems, thrusts are required towards the development of vector control strategies covering the following fields: (1) tools for vector control integrated in primary health care, (2) new chemicals, (3) improved and new biologicals, (4) environmental management and the adoption of health safeguards in resource development projects, and (5) manpower development.
NASA Technical Reports Server (NTRS)
Kumar, A.
1984-01-01
A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user-oriented and is structured in such a way that for most supersonic flow problems the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used, with suitable changes in the boundary conditions, for a variety of other problems.
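To illustrate the class of method named above (an explicit, two-step predictor-corrector finite-difference scheme in conservation form), here is a minimal MacCormack-type sketch for 1D linear advection; this is a generic textbook scheme, not the NASCRIN code:

```python
import numpy as np

# MacCormack-type two-step explicit scheme for u_t + a u_x = 0 in
# conservation form, on a periodic domain. Parameters are illustrative.

a, L, nx, cfl = 1.0, 1.0, 200, 0.8
dx = L / nx
dt = cfl * dx / a
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)          # initial Gaussian pulse

for _ in range(100):
    f = a * u
    # Predictor: forward difference of the flux.
    u_star = u - dt / dx * (np.roll(f, -1) - f)
    f_star = a * u_star
    # Corrector: backward difference, averaged with the predictor.
    u = 0.5 * (u + u_star - dt / dx * (f_star - np.roll(f_star, 1)))

print(u.max())   # pulse advected with small dissipation/dispersion
```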
Parallel/Vector Integration Methods for Dynamical Astronomy
NASA Astrophysics Data System (ADS)
Fukushima, T.
Progress in parallel/vector computers has driven us to develop numerical integrators that utilize their computational power to the full extent while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be rather inefficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) can lead to acceleration factors of the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) appears to lend support to Milankar's so-called "pipelined predictor-corrector method", which is expected to yield an acceleration factor of 3-4. We review these directions and discuss future prospects.
Using Grid Cells for Navigation
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-01-01
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
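As a rough illustration of the Fourier-shift idea mentioned above, here is a generic phase-correlation sketch under the simplifying assumption of a single periodic 2D rate map; this is not the paper's neural implementation, and all names are illustrative:

```python
import numpy as np

# For a periodic map, a translation appears as a phase shift in the
# Fourier domain; phase correlation recovers the displacement vector.

def estimate_shift(start_map, goal_map):
    """Estimate the 2D translation between two same-size periodic maps."""
    F1, F2 = np.fft.fft2(start_map), np.fft.fft2(goal_map)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (periodic wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Example: a map shifted by (3, -5) pixels should be recovered.
rng = np.random.default_rng(0)
m = rng.random((64, 64))
shifted = np.roll(m, (3, -5), axis=(0, 1))
print(estimate_shift(shifted, m))  # -> (3, -5)
```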
Research and simulation of the decoupling transformation in AC motor vector control
NASA Astrophysics Data System (ADS)
He, Jiaojiao; Zhao, Zhongjie; Liu, Ken; Zhang, Yongping; Yao, Tuozhong
2018-04-01
The permanent magnet synchronous motor (PMSM) is a nonlinear, strongly coupled, multivariable system, and coordinate-transformation decoupling can resolve its coupling problem. This paper presents a mathematical model of the PMSM and introduces the coordinate transformations used in PMSM vector control: by diagonalizing the inductance matrix with a modal matrix, the coupled quantities in different coordinate frames become independent, separating the excitation (flux-producing) current from the torque-producing current so that each can be controlled independently. The corresponding coordinate transformation matrices are derived, providing an approach to the coupling problem of AC motors. Finally, a simulation model of PMSM vector control is built in the Matlab/Simulink environment by assembling the PMSM body and the coordinate-transformation modules; each part of the model is described and the simulation results are analyzed.
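A minimal sketch of the standard transformations behind this decoupling (the textbook Clarke and Park transforms, not the paper's Simulink model):

```python
import numpy as np

# Clarke (abc -> alpha/beta) then Park (alpha/beta -> dq): in the dq
# frame the flux-producing current i_d and the torque-producing current
# i_q decouple and can be regulated by independent controllers.

def clarke(i_abc):
    """Amplitude-invariant Clarke transform: 3-phase currents -> (alpha, beta)."""
    T = (2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
    return T @ i_abc

def park(i_alphabeta, theta):
    """Park transform: rotate (alpha, beta) by rotor angle theta into (d, q)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])
    return R @ i_alphabeta

i_abc = np.array([1.0, -0.5, -0.5])   # instantaneous phase currents
i_dq = park(clarke(i_abc), theta=0.3)
print(i_dq)
```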
Laplace-Runge-Lenz vector in quantum mechanics in noncommutative space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gáliková, Veronika; Kováčik, Samuel; Prešnajder, Peter
2013-12-15
The main point of this paper is to examine a “hidden” dynamical symmetry connected with the conservation of the Laplace-Runge-Lenz (LRL) vector in the hydrogen atom problem, solved by means of non-commutative quantum mechanics (NCQM). The basic features of NCQM will be introduced to the reader, the key one being the fact that the notion of a point, or a zero distance in the considered configuration space, is abandoned and replaced with a “fuzzy” structure in such a way that the rotational invariance is preserved. The main facts about the conservation of the LRL vector in both classical and quantum theory will be reviewed. Finally, we will search for an analogy in the NCQM, provide our results and their comparison with the QM predictions. The key notions we are going to deal with are non-commutative space, the Coulomb-Kepler problem, and symmetry.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model of the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. Experimental results show that the display quality of the recovered image is superior to that of previous approaches.
New fuzzy support vector machine for the class imbalance problem in medical datasets classification.
Gu, Xiaoqing; Ni, Tongguang; Wang, Hongyuan
2014-01-01
In medical dataset classification, the support vector machine (SVM) is considered one of the most successful methods. However, most real-world medical datasets contain outliers/noise and often suffer from class imbalance. In this paper, a fuzzy support vector machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented; it can be seen as a modified FSVM obtained by extending manifold regularization and assigning two misclassification costs to the two classes. The proposed FSVM-CIP can handle the class imbalance problem in the presence of outliers/noise, and enhances the locality maximum margin. Five real-world medical datasets from the UCI repository (breast, heart, hepatitis, BUPA liver, and Pima diabetes) are employed to illustrate the method. Experimental results on these datasets show that FSVM-CIP outperforms, or is comparable to, competing methods.
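A minimal sketch of the two ingredients the abstract combines, per-class misclassification costs for the imbalance and per-sample fuzzy memberships that down-weight outliers, using a generic kernel SVM; this is not the FSVM-CIP algorithm itself, and the toy data, weights, and membership rule are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(200, 2))     # majority (healthy) class
X_minor = rng.normal(2.5, 1.0, size=(20, 2))      # minority (disease) class
X = np.vstack([X_major, X_minor])
y = np.array([0] * 200 + [1] * 20)

# Fuzzy membership: samples far from their class centroid get less weight.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dist = np.linalg.norm(X - centroids[y], axis=1)
membership = 1.0 - dist / (dist.max() + 1e-3)

# Two misclassification costs via class_weight; memberships via sample_weight.
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 10.0})
clf.fit(X, y, sample_weight=membership)
print(clf.score(X, y))
```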
Three-dimensional elasticity solution of an infinite plate with a circular hole
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1982-01-01
The elasticity problem for a thick plate with a circular hole is formulated in a systematic fashion by using the z-component of the Galerkin vector and that of Muki's harmonic vector function. The problem was originally solved by Alblas. The reasons for reconsidering it are to develop a technique which may be used in solving the elasticity problem for a multilayered plate and to verify and extend the results given by Alblas. The problem is reduced to an infinite system of algebraic equations which is solved by the method of reduction. Various stress components are tabulated as functions of a/h, z/h, r/a, and ν, where a is the radius of the hole, 2h the plate thickness, and ν the Poisson ratio. The significant effect of the Poisson ratio on the behavior and the magnitude of the stresses is discussed.
Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel
2012-01-01
In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764
ERIC Educational Resources Information Center
Belov, Dmitry I.
2008-01-01
In educational practice, a test assembly problem is formulated as a system of inequalities induced by test specifications. Each solution to the system is a test, represented by a 0-1 vector, where each element corresponds to an item included (1) or not included (0) into the test. Therefore, the size of a 0-1 vector equals the number of items "n"…
On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.
ERIC Educational Resources Information Center
Carter, Randy L.; And Others
1989-01-01
The partitioning of squared Euclidean (E²) distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
ERIC Educational Resources Information Center
Mikula, Brendon D.; Heckler, Andrew F.
2017-01-01
We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with…
Acquisition and Reduction Procedures for MOF Doppler-Magnetograms. [solar observation
NASA Technical Reports Server (NTRS)
Cacciani, Alessandro; Ricci, D.; Rosati, P.; Rhodes, Edward J., Jr.; Smith, E.; Tomczyk, Steven; Ulrich, Roger K.
1988-01-01
Defects in the first magneto-optical filter (MOF) magnetograms, particularly the problem of the apparent contamination between velocity and magnetic fields, are discussed. It is found that a correct acquisition and reduction procedure gives cleaner results. A vector magnetograph is suggested. The vector field at coronal levels is calculated, using one MOF longitudinal magnetogram.
West Nile Virus Fitness Costs in Different Mosquito Species.
Coffey, Lark L; Reisen, William K
2016-06-01
West Nile virus (WNV) remains an important public health problem causing annual epidemics in the United States. Grubaugh et al. observed that WNV genetic divergence is dependent on the vector mosquito species. This suggests that specific WNV vector-bird species pairings may generate novel genotypes that could promote outbreaks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
A search is presented for new high-mass resonances decaying into electron or muon pairs. The search uses proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the CMS experiment at the LHC in 2016, corresponding to an integrated luminosity of 36 fb$^{-1}$. Observations are in agreement with standard model expectations. Upper limits on the product of a new resonance production cross section and branching fraction to dileptons are calculated in a model-independent manner. This permits the interpretation of the limits in models predicting a narrow dielectron or dimuon resonance. A scan of different intrinsic width hypotheses is performed. Limits are set on the masses of various hypothetical particles. For the Z$'_\mathrm{SSM}$ (Z$'_{\psi}$) particle, which arises in the sequential standard model (superstring-inspired model), a lower mass limit of 4.50 (3.90) TeV is set at 95% confidence level. The lightest Kaluza-Klein graviton arising in the Randall-Sundrum model of extra dimensions, with coupling parameters $k/\overline{M}_\mathrm{Pl}$ of 0.01, 0.05, and 0.10, is excluded at 95% confidence level below 2.10, 3.65, and 4.25 TeV, respectively. In a simplified model of dark matter production via a vector or axial vector mediator, limits at 95% confidence level are obtained on the masses of the dark matter particle and its mediator.
ERIC Educational Resources Information Center
Khotimah, Rita Pramujiyanti; Masduki
2016-01-01
Differential equations is a branch of mathematics which is closely related to mathematical modeling that arises in real-world problems. Problem solving ability is an essential component to solve contextual problem of differential equations properly. The purposes of this study are to describe contextual teaching and learning (CTL) model in…
Kiryu, Tohru; Yamada, Hiroshi; Jimbo, Masahiro; Bando, Takehiko
2004-01-01
Virtual reality (VR) is a promising technology in biomedical engineering, but it also brings with it the problem of cybersickness. Aiming at the suppression of cybersickness, we quantitatively investigate the influence of vection-inducing images on autonomic regulation. We used motion vectors to quantify image scenes and measured the electrocardiogram, blood pressure, and respiration to evaluate autonomic regulation. Using the estimated motion vectors, we further synthesized random-dot pattern images to survey which component of the global motion vectors most seriously affected autonomic regulation. The results showed that the zoom component within a specific frequency band (0.1-3.0 Hz) would induce sickness.
Wang, Zhen; Li, Ru; Yu, Guolin
2017-01-01
In this work, several extended approximately invex vector-valued functions of higher order involving a generalized Jacobian are introduced, and some examples are presented to illustrate their existence. The notions of higher-order (weak) quasi-efficiency with respect to a function are proposed for multi-objective programming. Under the introduced higher-order approximate invexity assumptions, we prove that the solutions of generalized vector variational-like inequalities in terms of the generalized Jacobian are generalized quasi-efficient solutions of nonsmooth multi-objective programming problems. Moreover, equivalent conditions are presented, namely that a vector critical point is a weakly quasi-efficient solution of higher order with respect to a function.
Parham, Paul E.; Waldock, Joanna; Christophides, George K.; ...
2015-02-16
Arguably one of the most important effects of climate change is the potential impact on human health. While this is likely to take many forms, the implications for future transmission of vector-borne diseases (VBDs), given their ongoing contribution to global disease burden, are both extremely important and highly uncertain. In part, this is due not only to data limitations and methodological challenges when integrating climate-driven VBD models and climate change projections, but, perhaps most crucially, the multitude of epidemiological, ecological, and socioeconomic factors that drive VBD transmission, and this complexity has generated considerable debate over the last 10-15 years. In this article, and Theme Issue, we seek to elucidate current knowledge around this topic, identify key themes and uncertainties, evaluate ongoing challenges and open research questions, and, crucially, offer some solutions for the field moving forwards. Although many of these challenges are ubiquitous across multiple VBDs, more specific issues also arise in different vector-pathogen systems. This Theme Issue seeks to cover both, reflected in the breadth and depth of the topics and VBD-systems considered, itself strongly indicative of the challenging, but necessary, multidisciplinary nature of this research field.
Mukherjee, Sayandip; Thrasher, Adrian J
2014-01-01
Gene therapy presents an attractive alternative to allogeneic haematopoietic stem cell transplantation (HSCT) for treating patients suffering from primary immunodeficiency disorder (PID). The conceptual advantage of gene correcting a patient's autologous HSCs lies in minimizing or completely avoiding immunological complications arising from allogeneic transplantation while conferring the same benefits of immune reconstitution upon long-term engraftment. Clinical trials targeting X-linked chronic granulomatous disease (X-CGD) have shown promising results in this context. However, long-term clinical benefits in these patients have been limited by issues of poor engraftment of gene-transduced cells coupled with transgene silencing and vector-induced clonal proliferation. Novel vectors incorporating safety features such as self-inactivating (SIN) mutations in the long terminal repeats (LTRs), along with synthetic promoters driving lineage-restricted sustainable expression of the gp91phox transgene, are expected to resolve the current pitfalls and require rigorous preclinical testing. In this chapter, we have outlined a protocol in which X-CGD mouse model derived induced pluripotent stem cells (iPSCs) have been utilized to develop a platform for investigating the efficacy and safety profiles of novel vectors prior to clinical evaluation.
Quantitative tissue polarimetry using polar decomposition of 3 x 3 Mueller matrix
NASA Astrophysics Data System (ADS)
Swami, M. K.; Manhas, S.; Buddhiwant, P.; Ghosh, N.; Uppal, A.; Gupta, P. K.
2007-05-01
Polarization properties of any optical system are completely described by a sixteen-element (4 x 4) matrix called the Mueller matrix, which transforms the Stokes vector describing the polarization of the incident light into the Stokes vector of the scattered light. Measuring all the elements of the matrix requires a minimum of sixteen measurements involving both linearly and circularly polarized light. However, for many diagnostic applications it would be useful if all the polarization parameters of the medium (depolarization (Δ), differential attenuation of two orthogonal polarizations, i.e., diattenuation (d), and differential phase retardance of two orthogonal polarizations, i.e., retardance (δ)) could be quantified with linear polarization measurements alone. In this paper we show that for a turbid medium, like biological tissue, where the depolarization of linearly polarized light arises primarily from the randomization of the field vector's direction by multiple scattering, the polarization parameters of the medium can be obtained from the nine Mueller matrix elements involving linear polarization measurements only. Use of the approach for the measurement of polarization parameters (Δ, d and δ) of normal and malignant (squamous cell carcinoma) tissues resected from the human oral cavity is presented.
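A minimal illustration of the quantities named above, using assumed textbook definitions rather than the authors' decomposition code: a Mueller matrix maps an incident Stokes vector to the scattered one, and the diattenuation follows from the first row of the matrix. The numerical matrix is purely illustrative:

```python
import numpy as np

M = np.array([[1.00, 0.05, 0.02, 0.00],
              [0.05, 0.60, 0.00, 0.00],
              [0.02, 0.00, 0.55, 0.10],
              [0.00, 0.00, -0.10, 0.50]])  # illustrative tissue-like Mueller matrix

s_in = np.array([1.0, 1.0, 0.0, 0.0])      # horizontally polarized light
s_out = M @ s_in                            # Stokes vector of scattered light

d = np.linalg.norm(M[0, 1:]) / M[0, 0]      # diattenuation from the first row
print(s_out, d)
```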
Higher order sensitivity of solutions to convex programming problems without strict complementarity
NASA Technical Reports Server (NTRS)
Malanowski, Kazimierz
1988-01-01
Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.
NASA Technical Reports Server (NTRS)
Mullenmeister, Paul
1988-01-01
The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem $\min_x\,(y - Mx)^T R^{-1}(y - Mx) + x^T B^{-1} x$. The first term penalizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
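For fixed, known R and B (i.e., without the variational Bayes treatment of the abstract), the minimizer of the penalized least-squares objective above has the standard closed form x* = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y. A minimal sketch with an assumed toy problem:

```python
import numpy as np

def source_term_estimate(M, y, R, B):
    """Closed-form minimizer of (y-Mx)^T R^-1 (y-Mx) + x^T B^-1 x."""
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    return np.linalg.solve(M.T @ Ri @ M + Bi, M.T @ Ri @ y)

rng = np.random.default_rng(1)
M = rng.random((40, 10))                  # SRS matrix (ill-conditioned in practice)
x_true = np.abs(rng.normal(size=10))      # non-negative source term
y = M @ x_true + 0.01 * rng.normal(size=40)
R = 0.01 ** 2 * np.eye(40)                # measurement-error covariance
B = np.eye(10)                            # prior covariance on x
print(source_term_estimate(M, y, R, B))
```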
NASA Astrophysics Data System (ADS)
Zhou, Yajun
This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of the light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. Meanwhile, exploiting the properties of compact operators, we outline the geometric and physical conditions that guarantee a robust solution to the light scattering problem, and devise an asymptotic solution to the Born equation of electromagnetic scattering for arbitrarily shaped dielectric in a non-perturbative manner.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
Chemistry and the Internal Combustion Engine II: Pollution Problems.
ERIC Educational Resources Information Center
Hunt, C. B.
1979-01-01
Discusses pollution problems which arise from the use of internal combustion (IC) engines in the United Kingdom (UK). The IC engine exhaust emissions, controlling IC engine pollution in the UK, and some future developments are also included. (HM)
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
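A minimal sketch of the generic spectral-reordering idea described above (a Fiedler-vector ordering, not the paper's exact algorithm; the random test matrix is illustrative and may be disconnected):

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

# Build a symmetric 0/1 adjacency pattern, form its graph Laplacian,
# take the eigenvector of the second-smallest eigenvalue (the Fiedler
# vector), and sort its components to obtain the permutation.

A = sprandom(200, 200, density=0.02, random_state=0)
A = ((A + A.T) > 0).astype(float)
L = laplacian(A.tocsr())

vals, vecs = eigsh(L, k=2, which="SM")      # two smallest eigenpairs
fiedler = vecs[:, np.argsort(vals)[1]]
perm = np.argsort(fiedler)                  # closest permutation ordering
print(perm[:10])
```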
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data into the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
ERIC Educational Resources Information Center
Vivaldi, Gonzalo Martin
1975-01-01
This article discusses the problems that arise with the formation of plural forms of surnames in Spanish, problems both with morphology and with ambiguity. Suggestions as to how to lessen problems are made. (Text is in Spanish.) (CLK)
NASA Astrophysics Data System (ADS)
Masterton, S. M.; Gubbins, D.; Müller, D.; Williams, S.
2013-12-01
The lithospheric contribution to the geomagnetic field arises from magnetised rocks that are cooler than the Curie temperature of their constituent minerals. Inversion of the magnetic field for this magnetisation is subject to inherent non-uniqueness, as many magnetisation distributions yield no potential field outside of the lithosphere. Such distributions are termed annihilators. We use a complete set of orthogonal vector spherical harmonics that separate the part of the magnetisation responsible for the magnetic field observed above the Earth's surface from the annihilators. A similar set of vector harmonics has been developed in Cartesian geometry suitable for small scale, industrial applications. In an attempt to quantify the significance of the annihilators, we first construct a global model of vertically integrated magnetisation (VIM) by combining a model of remanent magnetisation for the oceans with a previous model of induced magnetisation for the whole Earth. Remanence is computed by assigning magnetisations to the oceanic lithosphere acquired at the location and time of formation. The magnetising field is assumed to be an axial dipole that switches polarity with the reversal time scale. The magnetisation evolves with time by decay of thermal remanence and acquisition of chemical remanence. Remanence directions are calculated by implementing finite rotations of the original geomagnetic field direction with respect to an absolute reference frame. We then represent our estimated VIM in terms of vector spherical harmonics, to allow us to evaluate its relative contributions to a potential field that is observable outside of the lithosphere and to fields (both potential and non-potential) that are not observable. This analysis shows that our model of magnetisation is dominated by a part of the magnetisation that produces a potential field restricted to Earth's sub-lithospheric interior; it therefore contributes significantly to the huge null space in the inversion of lithospheric magnetic anomaly data for VIM. We calculate the observable potential field that arises from our magnetisation estimates and compare it with a model that is based upon satellite data (MF7); this allows us to evaluate our magnetisation estimates and suggest likely sources of error in areas with high misfit between our predictions and the observed magnetic field. For example, under-prediction of the observed magnetic field may be indicative of poorly-known magnetisation deep in the crust or upper mantle, locally underplated continental lithosphere or anomalous oceanic crust.
A Stochastic Employment Problem
ERIC Educational Resources Information Center
Wu, Teng
2013-01-01
The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario in which one assigns balls to boxes. Balls arrive sequentially with each one having a binary vector X = (X[subscript 1], X[subscript 2],...,X[subscript n]) attached, with the interpretation being that if X[subscript i] = 1 the ball…
Three-Dimensional Profiles Using a Spherical Cutting Bit: Problem Solving in Practice
ERIC Educational Resources Information Center
Ollerton, Richard L.; Iskov, Grant H.; Shannon, Anthony G.
2002-01-01
An engineering problem concerned with relating the coordinates of the centre of a spherical cutting tool to the actual cutting surface leads to a potentially rich example of problem-solving techniques. Basic calculus, Lagrange multipliers and vector calculus techniques are employed to produce solutions that may be compared to better understand…
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement, commonly referred to as noise. Because the inverse problem is ill-posed, the identified forces are sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to such perturbations. The results show that TGSVD offers higher precision, better adaptability and stronger noise immunity than TDM. In addition, choosing a proper regularization matrix L and truncation parameter k markedly improves the identification accuracy when the method is used to identify moving forces on a bridge.
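A minimal sketch of the plain truncated-SVD special case (L = I) of the TGSVD idea described above; the toy problem and names are illustrative, not the paper's bridge data:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve Ax ~ b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]          # damp the noise-dominated modes
    return Vt[:k].T @ coeffs

rng = np.random.default_rng(2)
A = np.vander(np.linspace(0, 1, 50), 12)    # mildly ill-conditioned matrix
x_true = rng.normal(size=12)
b = A @ x_true + 1e-4 * rng.normal(size=50) # noisy "measured" responses

for k in (4, 8, 12):
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true)
    print(k, err)   # too small k over-smooths, too large k amplifies noise
```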
Viruses vector control proposal: genus Aedes emphasis.
Reis, Nelson Nogueira; Silva, Alcino Lázaro da; Reis, Elma Pereira Guedes; Silva, Flávia Chaves E; Reis, Igor Guedes Nogueira
Dengue fever is a major public health problem worldwide. In Brazil in 2015 there were 1,534,932 cases, of which 20,320 were severe, and 811 deaths related to the disease. The distribution of Aedes aegypti, the vector, is extensive. Recently, the Zika and Chikungunya viruses have emerged, sharing the same vector as dengue, and have become a major public health issue. With no specific treatment available, effective vector control is urgently required. This article reviews vector control strategies and their effectiveness, viability and economic impact. Among them, the Sterile Insect Technique is highlighted as the best option to be adopted in Brazil, since it has been used effectively on a large scale in the USA and Mexico against agribusiness-related pests.
A hybrid structure for the storage and manipulation of very large spatial data sets
Peuquet, Donna J.
1982-01-01
The map data input and output problem for geographic information systems is rapidly diminishing with the increasing availability of mass digitizing, direct spatial data capture and graphics hardware based on raster technology. Although a large number of efficient raster-based algorithms exist for performing a wide variety of common tasks on these data, there are a number of procedures which are more efficiently performed in vector mode or for which raster mode equivalents of current vector-based techniques have not yet been developed. This paper presents a hybrid spatial data structure, named the 'vaster' structure, which can utilize the advantages of both raster and vector structures while potentially eliminating, or greatly reducing, the need for raster-to-vector and vector-to-raster conversion. Other advantages of the vaster structure are also discussed.
Aerial images visual localization on a vector map using color-texture segmentation
NASA Astrophysics Data System (ADS)
Kunina, I. A.; Teplyakov, L. M.; Gladkov, A. P.; Khanipov, T. M.; Nikolaev, D. P.
2018-04-01
In this paper we study the problem of combining UAV-obtained optical data with a coastal vector map in the absence of satellite navigation data. The method is based on representing the territory as a set of segments produced by color-texture image segmentation. We then find the geometric transform that gives the best match between these segments and the land and water areas of the georeferenced vector map. The transform consists of an arbitrary shift relative to the vector map together with bounded rotation and scaling. These parameters are estimated using the RANSAC algorithm, which matches the segment contours to the contours of the land and water areas of the vector map. To implement this matching we suggest computing shape descriptors robust to rotation and scaling. Numerical experiments demonstrate the practical applicability of the proposed method.
Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight, an application that requires real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution depends strongly on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such cases a solution can still be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation, which is subject to large errors for complex reference-point configurations. This paper focuses on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The implementation of the algorithm for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. Experimental evaluation confirmed the algorithm's computational efficiency and its robustness against errors in reference point positions and complex configurations.
Improvements on ν-Twin Support Vector Machine.
Khemchandani, Reshma; Saigal, Pooja; Chandra, Suresh
2016-07-01
In this paper, we propose two novel binary classifiers, termed "Improvements on ν-Twin Support Vector Machine": Iν-TWSVM and Iν-TWSVM (Fast), that are motivated by the ν-Twin Support Vector Machine (ν-TWSVM). Similar to ν-TWSVM, Iν-TWSVM determines two nonparallel hyperplanes such that each is closer to its respective class and at least ρ distance away from the other class. The significant advantage of Iν-TWSVM over ν-TWSVM is that Iν-TWSVM solves one smaller-sized Quadratic Programming Problem (QPP) and one Unconstrained Minimization Problem (UMP), as compared to solving two related QPPs in ν-TWSVM. Further, Iν-TWSVM (Fast) avoids solving the smaller-sized QPP altogether by transforming it into a unimodal function, which can be solved using line search methods; as in Iν-TWSVM, the other problem is solved as a UMP. Owing to their novel formulation, the proposed classifiers are faster than ν-TWSVM and have comparable generalization ability. Iν-TWSVM also implements the structural risk minimization (SRM) principle by introducing a regularization term along with minimizing the empirical risk. The other properties of Iν-TWSVM related to support vectors (SVs) are similar to those of ν-TWSVM. To test the efficacy of the proposed method, experiments have been conducted on a wide range of UCI datasets and a skewed variation of the NDC datasets. We also present an application of Iν-TWSVM as a binary classifier for pixel classification of color images.
NASA Astrophysics Data System (ADS)
Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu
2005-10-01
A numerical model of vector radiative transfer in the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is split into a set of independent equations with the zenith angle as the only angular coordinate. Using the Gaussian-quadrature method, the VRTE is then reduced to a matrix equation, which is solved using the adding-doubling method. The ocean and atmosphere models are coupled in PCOART through the reflective and refractive properties of the ocean-atmosphere interface. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-Resolution Imaging Spectroradiometer) shows that PCOART is numerically exact and that its treatment of multiple scattering and polarization is correct. Validation against standard underwater radiative transfer problems further shows that PCOART can be used to calculate radiative transfer in water. Therefore, PCOART is a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of the radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
Katz, Michael G.; Fargnoli, Anthony S.; Williams, Richard D.
2013-01-01
Gene therapy is one of the most promising fields for developing new treatments for the advanced stages of ischemic and monogenetic, particularly autosomal or X-linked recessive, cardiomyopathies. The remarkable ongoing efforts in advancing various targets have largely been inspired by the results that have been achieved in several notable gene therapy trials, such as those for hemophilia B and Leber's congenital amaurosis. Rate-limiting problems preventing successful clinical application in the cardiac disease area, however, are primarily attributable to inefficient gene transfer, host responses, and the lack of sustainable therapeutic transgene expression. It is arguable that these problems are directly correlated with the choice of vector, dose level, and associated cardiac delivery approach as a whole treatment system. Essentially, a delicate balance exists in maximizing gene transfer required for efficacy while remaining within safety limits. Therefore, the development of safe, effective, and clinically applicable gene delivery techniques for selected nonviral and viral vectors will certainly be invaluable in obtaining future regulatory approvals. The choice of gene transfer vector, dose level, and delivery system are likely to be critical determinants of therapeutic efficacy. It is here that the interactions between vector uptake and trafficking, delivery route, and the host's physical limits must be considered synergistically for a successful treatment course. PMID:24164239
Synovial sarcoma of the neck associated with previous head and neck radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mischler, N.E.; Chuprevich, T.; Tormey, D.C.
1978-08-01
Synovial sarcoma is a rare neoplasm that uncommonly arises in the neck. Fourteen years after facial and neck radiation therapy for acne, synovial sarcoma of the neck developed in a young man. Possible radiation-induced benign and malignant neoplasms that arise in the head and neck region, either of thyroid or extrathyroid origin, remain a continuing medical problem.
Obesity: a problem of darwinian proportions?
Watnick, Suzanne
2006-10-01
Obesity has been described as an abnormality arising from the evolution of man, who becomes fat in times of perpetual plenty. From the perspective of "Darwinian medicine," if famine is avoided, obesity will prevail. Problems regarding obesity arise within many disciplines, including socioeconomic environments, the educational system, science, law, and government. This article discusses various ethical aspects of these disciplines with respect to obesity, with a focus on scientific inquiry, within three categories: (1) predialysis chronic kidney disease, (2) dialysis, and (3) renal transplantation. The article aims to help nephrologists and their patients navigate the ethical aspects of obesity and chronic kidney disease.
A novel profit-allocation strategy for SDN enterprises
NASA Astrophysics Data System (ADS)
Hu, Wei; Hou, Ye; Tian, Longwei; Li, Yuan
2017-01-01
To address the problem that existing profit-allocation schemes for supply and demand network (SDN) enterprises ignore risk factors and generate low satisfaction, a novel profit-allocation model based on cooperative game theory and TOPSIS is proposed. The new model avoids the defects of single-method profit-allocation models by introducing risk factors, compromise coefficients and high negotiation points. By measuring the Euclidean distance of each allocation to the ideal solution vector and the negative-ideal solution vector, the satisfaction of every node in the SDN is accounted for, and disorderly allocations are avoided. Finally, the rationality and effectiveness of the proposed model are verified using a numerical example.
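A minimal sketch of the generic TOPSIS ranking step referenced above (standard method, not the paper's full model; the score matrix, weights, and criterion directions are illustrative):

```python
import numpy as np

def topsis(scores, weights, benefit):
    """scores: (n_alternatives, n_criteria); benefit: True where larger is better.
    Returns the closeness coefficient in [0, 1]; higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    negative = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)      # distance to ideal solution
    d_neg = np.linalg.norm(v - negative, axis=1)   # distance to negative ideal
    return d_neg / (d_pos + d_neg)

scores = np.array([[0.7, 0.2, 0.5],
                   [0.5, 0.4, 0.9],
                   [0.9, 0.1, 0.4]])
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([True, False, True])    # criterion 2 is a risk (cost) factor
print(topsis(scores, weights, benefit))
```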
Closedness of orbits in a space with SU(2) Poisson structure
NASA Astrophysics Data System (ADS)
Fatollahi, Amir H.; Shariati, Ahmad; Khorrami, Mohammad
2014-06-01
The closedness of orbits of central forces is addressed in a three-dimensional space in which the Poisson bracket among the coordinates is that of the SU(2) Lie algebra. In particular it is shown that among problems with spherically symmetric potential energies, it is only the Kepler problem for which all bounded orbits are closed. In analogy with the case of the ordinary space, a conserved vector (apart from the angular momentum) is explicitly constructed, which is responsible for the orbits being closed. This is the analog of the Laplace-Runge-Lenz vector. The algebra of the constants of the motion is also worked out.
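For reference, the conserved vector in the ordinary-space Kepler problem that the paper's construction parallels is the standard Laplace-Runge-Lenz vector (textbook form; the noncommutative analog constructed in the paper differs):

```latex
% Kepler problem with potential V(r) = -k/r: the Laplace-Runge-Lenz
% vector is conserved, and its conservation closes all bounded orbits.
\mathbf{A} \;=\; \mathbf{p}\times\mathbf{L} \;-\; m k\,\hat{\mathbf{r}},
\qquad
\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \;=\; 0 .
```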
Applications of Support Vector Machines In Chemo And Bioinformatics
NASA Astrophysics Data System (ADS)
Jayaraman, V. K.; Sundararajan, V.
2010-10-01
Conventional linear and nonlinear tools for classification, regression and data-driven modeling are rapidly being replaced by newer techniques based on artificial intelligence and machine learning. While linear techniques are not applicable to inherently nonlinear problems, the newer methods serve as attractive alternatives for solving real-life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward network based classification algorithms formulated from statistical learning theory and the structural risk minimization principle. SVM regression closely follows the classification methodology. In this work recent applications of SVM in chemo- and bioinformatics are described with suitable illustrative examples.
A SVM framework for fault detection of the braking system in a high speed train
NASA Astrophysics Data System (ADS)
Liu, Jie; Li, Yan-Fu; Zio, Enrico
2017-03-01
By April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3603. An efficient, effective and very reliable braking system is clearly critical for trains running at speeds around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders fault detection a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can greatly reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least-squares SVM, in which a higher cost is assigned to the misclassification of faulty conditions than to the misclassification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets and then applied to the fault detection of braking systems in HSTs: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
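A minimal sketch of a cost-weighted least-squares SVM in the spirit of the framework above, using the generic weighted LS-SVM dual linear system with assumed toy data; this is not the authors' code, and the cost values and kernel parameters are illustrative:

```python
import numpy as np

def lssvm_train(X, y, costs, gamma=1.0, C=10.0):
    """Solve the weighted LS-SVM dual system
    [[0, y^T], [y, K*yy^T + diag(1/(C*costs))]] [b; alpha] = [0; 1]."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = K * np.outer(y, y) + np.diag(1.0 / (C * costs))
    sol = np.linalg.solve(A, np.concatenate([[0.0], np.ones(n)]))
    b, alpha = sol[0], sol[1:]
    return lambda Z: np.sign(
        np.exp(-gamma * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        @ (alpha * y) + b)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([-1.0] * 200 + [1.0] * 10)           # +1 = rare fault condition
costs = np.where(y > 0, 20.0, 1.0)                # higher cost on faults
predict = lssvm_train(X, y, costs)
print((predict(X) == y).mean())
```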
Transfers between libration-point orbits in the elliptic restricted problem
NASA Astrophysics Data System (ADS)
Hiday-Johnston, L. A.; Howell, K. C.
1994-04-01
A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize the entire data set to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into a one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies the corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment by local computations and information exchange among neighboring nodes only. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex problem without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results show that the proposed algorithm is an effective dimensionality reduction scheme for distributed tensor data across sensor networks.
Qualitative investigation into students' use of divergence and curl in electromagnetism
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; Baily, Charles; De Cock, Mieke
2016-12-01
Many students struggle with the use of mathematics in physics courses. Although typically well trained in rote mathematical calculation, they often lack the ability to apply their acquired skills to physical contexts. Such student difficulties are particularly apparent in undergraduate electrodynamics, which relies heavily on the use of vector calculus. To gain insight into student reasoning when solving problems involving divergence and curl, we conducted eight semistructured individual student interviews. During these interviews, students discussed the divergence and curl of electromagnetic fields using graphical representations, mathematical calculations, and the differential form of Maxwell's equations. We observed that while many students attempt to clarify the problem by making a sketch of the electromagnetic field, they struggle to interpret graphical representations of vector fields in terms of divergence and curl. In addition, some students confuse the characteristics of field line diagrams and field vector plots. By interpreting our results within the conceptual blending framework, we show how a lack of conceptual understanding of the vector operators and difficulties with graphical representations can account for an improper understanding of Maxwell's equations in differential form. Consequently, specific learning materials based on a multiple representation approach are required to clarify Maxwell's equations.
Co-Labeling for Multi-View Weakly Labeled Learning.
Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W
2016-06-01
It is often expensive and time consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi-view datasets clearly demonstrate that our proposed co-labeling approach achieves state-of-the-art performance for various multi-view weakly labeled learning problems including multi-view SSL, multi-view MIL and multi-view ROD.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
Orthogonal vector algorithm to obtain the solar vector using the single-scattering Rayleigh model.
Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Shi, Chao
2018-02-01
Information obtained from a polarization pattern in the sky provides many animals, such as insects and birds, with vital long-distance navigation cues. The solar vector can be derived from the polarization pattern using the single-scattering Rayleigh model. In this paper, an orthogonal vector algorithm, which utilizes the redundancy of the single-scattering Rayleigh model, is proposed. We use the intersection angles between the polarization vectors as the main criteria in our algorithm. The assumption that all polarization vectors can be considered coplanar is used to simplify the three-dimensional (3D) problem with respect to the polarization vectors in our simulation. The surface-normal vector of the plane, which is determined by the polarization vectors after translation, represents the solar vector. Unfortunately, the two-directionality of the polarization vectors makes the resulting solar vector ambiguous. One important result of this study is, however, that this apparent disadvantage has no effect on the complexity of the algorithm. Furthermore, two other universal least-squares algorithms were investigated and compared. A device was then constructed, which consists of five polarized-light sensors as well as a 3D attitude sensor. Both the simulation and experimental data indicate that the orthogonal vector algorithms, if used with a suitable threshold, perform as well as or better than the other two algorithms. Our experimental data reveal that if the intersection angles between the polarization vectors are close to 90°, the solar-vector angle deviations are small. The data also support the assumption of coplanarity. During the 51 min experiment, the mean of the measured solar-vector angle deviations was about 0.242°, as predicted by our theoretical model.
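The geometric core of the algorithm can be sketched as follows: under the single-scattering Rayleigh model (and the coplanarity assumption above), every polarization e-vector is perpendicular to the solar vector, so the solar vector can be estimated as the unit vector most nearly orthogonal to all measurements, i.e. the right singular vector with the smallest singular value. The data below are synthetic, and the sign ambiguity from the two-directionality of the e-vectors remains.

```python
import numpy as np

def solar_vector(E):
    """E: (n, 3) array of measured polarization unit vectors."""
    # The solar vector minimizes sum((e_i . s)^2) over unit vectors s,
    # i.e. it is the right singular vector of E with smallest singular value.
    _, _, Vt = np.linalg.svd(E, full_matrices=False)
    return Vt[-1]

# Synthetic test: e-vectors orthogonal to a known sun direction, plus noise.
rng = np.random.default_rng(1)
sun = np.array([0.3, 0.4, 0.866]); sun /= np.linalg.norm(sun)
raw = rng.normal(size=(5, 3))
E = raw - np.outer(raw @ sun, sun)              # project onto plane normal to sun
E /= np.linalg.norm(E, axis=1, keepdims=True)
E += 0.01 * rng.normal(size=E.shape)            # measurement noise
print(solar_vector(E), sun)                     # agree up to sign
```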
[Anti-influenza vaccination in animals].
Bublot, M
2009-01-01
Until recently, influenza was considered a veterinary problem only in birds, swine and horses. Over the last six years, new influenza strains able to infect and cause disease in dogs and cats have emerged. The most widely used veterinary influenza vaccines are inactivated adjuvanted vaccines based on whole or split virus. New technologies have allowed the development of new-generation vaccines, including modified-live and vector vaccines. Modified-live influenza vaccines are available for horses only, but they are in development for other species. Vector vaccines are already in use in chickens (replicative fowlpox vector) and in horses (non-replicative canarypox vector). These vaccines induce rapid cellular and humoral immunity. Experimental studies have also shown that these vector vaccines are protective in other domestic species. Vector vaccines are compatible with the "DIVA" strategy, which consists in differentiating infected from vaccinated animals and which allows disease eradication. The successive use of vector and inactivated vaccines (heterologous "prime-boost") induces superior protective immunity in domestic poultry and constitutes a promising strategy for the control of H5N1 infection.
Fast metabolite identification with Input Output Kernel Regression.
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-06-15
An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have recently been proposed to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the space of molecules. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, which consists in mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over preceding methods. celine.brouard@aalto.fi Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
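A numerical sketch of the two phases with Gaussian kernels may help; the "spectra", "molecules" and candidate set below are random stand-ins, not the kernels or data used in the paper.

```python
# Minimal Input Output Kernel Regression sketch (illustrative data only).
import numpy as np

def gauss_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 8))     # input representations (e.g. spectra)
Y_train = rng.normal(size=(50, 6))     # output representations (e.g. molecules)
x_test = rng.normal(size=(1, 8))
candidates = np.vstack([Y_train[7:8], rng.normal(size=(9, 6))])  # candidate set

lam, n = 1e-2, len(X_train)
Kx = gauss_kernel(X_train, X_train)
# Phase 1: kernel ridge regression from inputs into the output feature space.
alpha = np.linalg.solve(Kx + lam * n * np.eye(n), gauss_kernel(X_train, x_test))
# Phase 2 (preimage): score each candidate y via k_y(y, Y_train) @ alpha.
scores = gauss_kernel(candidates, Y_train) @ alpha
print(int(scores.argmax()))            # index of the best-scoring candidate
```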
Nonlinear pulse shaping and polarization dynamics in mode-locked fiber lasers
NASA Astrophysics Data System (ADS)
Boscolo, Sonia; Sergeyev, Sergey V.; Mou, Chengbo; Tsatourian, Veronika; Turitsyn, Sergei; Finot, Christophe; Mikhailov, Vitaly; Rabin, Bryan; Westbrook, Paul S.
2014-03-01
We review our recent progress on the study of new nonlinear mechanisms of pulse shaping in passively mode-locked fiber lasers. These include a mode-locking regime featuring pulses with a triangular distribution of the intensity, and spectral compression arising from nonlinear pulse propagation. We also report on our recent experimental studies unveiling new types of vector solitons with precessing states of polarization for multi-pulse and tightly bound-state soliton (soliton molecule) operations in a carbon nanotube (CNT) mode-locked fiber laser with an anomalous-dispersion cavity.
MISTY CASTLE Series. MILL RACE Event. Sanitized.1
1981-12-18
Answer Yes or No) … b. If 13a is Yes, is the use of the materials governed by NASC procedures? c. If 13b is Yes, the quantity of material is … pitch is defined as the angle between a plane normal to the gravity vector and a line through the longitudinal axis of the aircraft … strength which is greater than that resistance which arises from the combined effects of friction and gravity. It would be very conservative to assume …
On generalized Volterra systems
NASA Astrophysics Data System (ADS)
Charalambides, S. A.; Damianou, P. A.; Evripidou, C. A.
2015-01-01
We construct a large family of evidently integrable Hamiltonian systems which are generalizations of the KM system. The algorithm uses the root system of a complex simple Lie algebra. The Hamiltonian vector field is homogeneous cubic but in a number of cases a simple change of variables transforms such a system to a quadratic Lotka-Volterra system. We present in detail all such systems in the cases of A3, A4 and we also give some examples from higher dimensions. We classify all possible Lotka-Volterra systems that arise via this algorithm in the An case.
Robust support vector regression networks for function approximation with outliers.
Chuang, Chen-Chia; Su, Shun-Feng; Jeng, Jin-Tsong; Hsiao, Chih-Ching
2002-01-01
Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to be robust against noise. When the parameters used in SVR are improperly selected, however, overfitting may still occur, and the selection of the various parameters is not straightforward. Moreover, in SVR, outliers may be selected as support vectors, and such an inclusion of outliers among the support vectors can lead to severe overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In this approach, traditional robust learning methods are employed to improve the learning performance for any selected parameters. The simulation results show that RSVR improves the performance of the learned systems in all cases. Furthermore, even when training lasted for a long period, the testing errors did not go up; in other words, the overfitting phenomenon is indeed suppressed.
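The inclusion of outliers among the support vectors is easy to demonstrate with a generic epsilon-SVR (this sketch is not the paper's RSVR network; the data and parameters are invented):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 120)[:, None]
y = np.sinc(X).ravel() + 0.05 * rng.normal(size=120)
outliers = np.arange(0, 120, 17)
y[outliers] += 3.0                                  # inject gross outliers

# Points outside the epsilon-tube become support vectors, so every injected
# outlier ends up shaping the fit -- the failure mode RSVR is built to avoid.
svr = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X, y)
picked = set(svr.support_.tolist()) & set(outliers.tolist())
print(len(picked), "of", len(outliers), "outliers are support vectors")
```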
Initialization of Formation Flying Using Primer Vector Theory
NASA Technical Reports Server (NTRS)
Mailhe, Laurie; Schiff, Conrad; Folta, David
2002-01-01
In this paper, we extend primer vector analysis to formation flying. Optimization of the classical rendezvous or free-time transfer problem between two orbits using primer vector theory has been extensively studied for one spacecraft. However, an increasing number of missions are now considering flying a set of spacecraft in close formation. Missions such as the Magnetospheric MultiScale (MMS) and Leonardo-BRDF (Bidirectional Reflectance Distribution Function) need to determine strategies to transfer each spacecraft from the common launch orbit to their respective operational orbits. In addition, all the spacecraft must synchronize their states so that they achieve the same desired formation geometry over each orbit. This periodicity requirement imposes constraints on the boundary conditions that can be used for the primer vector algorithm. In this work we explore the impact of the periodicity requirement in optimizing each spacecraft transfer trajectory using primer vector theory. We first present our adaptation of primer vector theory to formation flying. Using this method, we then compute the ΔV budget for each spacecraft subject to different formation endpoint constraints.
Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural networks, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
Progress in developing cationic vectors for non-viral systemic gene therapy against cancer.
Morille, Marie; Passirani, Catherine; Vonarbourg, Arnaud; Clavreul, Anne; Benoit, Jean-Pierre
2008-01-01
Initially, gene therapy was viewed as an approach for treating hereditary diseases, but its potential role in the treatment of acquired diseases such as cancer is now widely recognized. The understanding of the molecular mechanisms involved in cancer and the development of nucleic acid delivery systems are two concepts that have led to this development. Systemic gene delivery systems are needed for therapeutic application to cells inaccessible by percutaneous injection and for multi-located tumor sites, i.e. metastases. Non-viral vectors based on the use of cationic lipids or polymers appear to have promising potential, given the problems of safety encountered with viral vectors. Using these non-viral vectors, the current challenge is to achieve transfection efficiency comparable to that of viral vectors. Based on the advantages and disadvantages of existing vectors and on the hurdles encountered with these carriers, the aim of this review is to describe the "perfect vector" for systemic gene therapy against cancer.
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
Many methods have been tried in the analysis of currency crises, but not all of them provide accurate indications. This paper introduces an ensemble of Support Vector Machine classifiers, which has not previously been applied to currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble's performance is measured using classification accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. Our results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the currency crisis indication problem across a range of standard measures of classifier performance.
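A hedged sketch of such a comparison follows (synthetic data standing in for the 27-country, 12-indicator panel; the authors' exact ensemble construction may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in: 12 "macroeconomic indicators", binary crisis label.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

single = SVC(probability=True, random_state=0).fit(Xtr, ytr)
ensemble = BaggingClassifier(SVC(probability=True), n_estimators=25,
                             random_state=0).fit(Xtr, ytr)
for name, model in [("single SVM", single), ("SVM ensemble", ensemble)]:
    proba = model.predict_proba(Xte)[:, 1]
    print(name, accuracy_score(yte, model.predict(Xte)),
          roc_auc_score(yte, proba))
```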
Urbanization, land tenure security and vector-borne Chagas disease.
Levy, Michael Z; Barbu, Corentin M; Castillo-Neyra, Ricardo; Quispe-Machaca, Victor R; Ancca-Juarez, Jenny; Escalante-Mejia, Patricia; Borrini-Mayori, Katty; Niemierko, Malwina; Mabud, Tarub S; Behrman, Jere R; Naquira-Velarde, Cesar
2014-08-22
Modern cities represent one of the fastest growing ecosystems on the planet. Urbanization occurs in stages, each characterized by a distinct habitat that may be more or less susceptible to the establishment of disease vector populations and the transmission of vector-borne pathogens. We performed longitudinal entomological and epidemiological surveys in households along a 1900 × 125 m transect of Arequipa, Peru, a major city of nearly one million inhabitants, in which the transmission of Trypanosoma cruzi, the aetiological agent of Chagas disease, by the insect vector Triatoma infestans, is an ongoing problem. The transect spans a cline of urban development from established communities to land invasions. We find that the vector is tracking the development of the city, and the parasite, in turn, is tracking the dispersal of the vector. New urbanizations are free of vector infestation for decades. T. cruzi transmission is very recent and concentrated in more established communities. The increase in land tenure security during the course of urbanization, if not accompanied by reasonable and enforceable zoning codes, initiates an influx of construction materials, people and animals that creates fertile conditions for epidemics of some vector-borne diseases. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
The prospect of gene therapy for prostate cancer: update on theory and status.
Koeneman, K S; Hsieh, J T
2001-09-01
Molecularly based novel therapeutic agents are needed to address the problem of locally recurrent, or metastatic, advanced hormone-refractory prostate cancer. Recent basic science advances in mechanisms of gene expression, vector delivery, and targeting have rendered clinically relevant gene therapy to the prostatic fossa and distant sites feasible in the near future. Current research and clinical investigative efforts involving methods for more effective vector delivery and targeting, with enhanced gene expression to selected (specific) sites, are reviewed. These areas of research involve tissue-specific promoters, transgene exploration, vector design and delivery, and selective vector targeting. The 'vectorology' involved mainly addresses selective tissue homing with ligands, mechanisms of innate immune system evasion for durable transgene expression, and the possibility of repeat administration.
Layeni, Olawanle P; Akinola, Adegbola P; Johnson, Jesse V
2016-01-01
Two distinct and novel formalisms for deriving exact closed solutions of a class of variable-coefficient differential-difference equations arising from a plate solidification problem are introduced. Thereupon, exact closed traveling wave and similarity solutions to the plate solidification problem are obtained for some special cases of time-varying plate surface temperature.
POEMS in Newton's Aerodynamic Frustum
ERIC Educational Resources Information Center
Sampedro, Jaime Cruz; Tetlalmatzi-Montiel, Margarita
2010-01-01
The golden mean is often naively seen as a sign of optimal beauty but rarely does it arise as the solution of a true optimization problem. In this article we present such a problem, demonstrating a close relationship between the golden mean and a special case of Newton's aerodynamical problem for the frustum of a cone. Then, we exhibit a parallel…
ERIC Educational Resources Information Center
Hegde, Balasubrahmanya; Meera, B. N.
2012-01-01
A perceived difficulty is associated with physics problem solving from a learner's viewpoint, arising out of a multitude of reasons. In this paper, we have examined the microstructure of students' thought processes during physics problem solving by combining the analysis of responses to multiple-choice questions and semistructured student…
NASA Technical Reports Server (NTRS)
Shea, T. G.
1974-01-01
Disinfection and corrosion control in the water systems of the Saturn 5 Orbital Workshop Program are considered. Within this framework, the problem areas of concern are classified into four general areas: disinfection; corrosion; membrane-associated problems of disinfectant uptake and diffusion; and taste and odor problems arising from membrane-disinfectant interaction.
Psychotherapy with Older Dying Persons.
ERIC Educational Resources Information Center
Dye, Carol J.
Psychotherapy with older dying patients can lead to problems of countertransference for the clinician. Working with dying patients requires flexibility to adapt basic therapeutics to the institutional setting. Goals of psychotherapy must be reconceptualized for dying clients. The problems of countertransference arise because clinicians themselves…
ERIC Educational Resources Information Center
Hendrickson, Homer
1988-01-01
Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…
ERIC Educational Resources Information Center
Kustusch, Mary Bridget
2016-01-01
Students in introductory physics struggle with vector algebra and these challenges are often associated with contextual and representational features of the problems. Performance on problems about cross product direction is particularly poor and some research suggests that this may be primarily due to misapplied right-hand rules. However, few…
NASA Astrophysics Data System (ADS)
Kustusch, Mary Bridget
2011-12-01
Students in introductory physics struggle with vector algebra and with cross product direction in particular. Some have suggested that this may be due to misapplied right-hand rules, but there are few studies that have had the resolution to explore this. Additionally, previous research on student understanding has noted several kinds of representation-dependence of student performance with vector algebra in both physics and non-physics (or math) contexts (e.g. Hawkins et al., 2009; Van Deventer, 2008). Yet with few exceptions (e.g. Scaife and Heckler, 2010), these findings have not been applied to cross product direction questions or the use of right-hand rules. Also, the extensive work in spatial cognition is particularly applicable to cross product direction due to the spatial and kinesthetic nature of the right-hand rule. A synthesis of the literature from these various fields reveals four categories of problem features likely to impact the understanding of cross product direction: (1) the type of reasoning required, (2) the orientation of the vectors, (3) the need for parallel transport, and (4) the physics context and features (or lack thereof). These features formed the basis of the present effort to systematically explore the context-dependence and representation-dependence of student performance on cross product direction questions. This study used a mix of qualitative and quantitative techniques to analyze twenty-seven individual think-aloud interviews. During these interviews, second-semester introductory physics students answered 80-100 cross product direction questions in different contexts and with varying problem features. These features were then used as the predictors in regression analyses for correctness and response time. In addition, each problem was coded for the methods used and the errors made to gain a deeper understanding of student behavior and the impact of these features. The results revealed a wide variety of methods (including six different right-hand rules), many different types of errors, and significant context-dependence and representation-dependence for the features mentioned above. Problems that required reasoning backward to find A⃗ (for C⃗ = A⃗ × B⃗) presented the biggest challenge for students. Participants who recognized the non-commutativity of the cross product would often reverse the order (B⃗ × A⃗) on these problems. Also, this error occurred less frequently when a Guess and Check method was used in addition to the right-hand rule. Three different aspects of orientation had a significant impact on performance: (1) the physical discomfort of using a right-hand rule, (2) the plane of the given vectors, and to a lesser extent, (3) the angle between the vectors. One participant was more likely to switch the order of the vectors for the physically awkward orientations than for the physically easy orientations; and there was evidence that some of the difficulty with vector orientations that were not in the xy-plane was due to misinterpretations of the into-the-page and out-of-the-page symbols. The impact of both physical discomfort and the plane of the vectors was reduced when participants rotated the paper. Unlike other problem features, the issue of parallel transport did not appear to be nearly as prevalent for cross product direction as it is for vector addition and subtraction.
In addition to these findings, this study confirmed earlier findings regarding physics difficulties with magnetic field and magnetic force, such as differences in performance based on the representation of magnetic field (Scaife and Heckler, 2010) and confusion between electric and magnetic fields (Maloney et al., 2001). It also provided evidence of physics difficulties with magnetic field and magnetic force that have been suspected but never explored, specifically the impact of the sign of the charge and the observation location. This study demonstrated that student difficulty with cross product direction is not as simple as misapplied right-hand rules, although this is an issue. Student behavior on cross product direction questions is significantly dependent on both the context of the question and the representation of various problem features. Although more research is necessary, particularly in regard to individual differences, this study represents a significant step forward in our understanding of student difficulties with cross product direction.
Inequalities, assessment and computer algebra
NASA Astrophysics Data System (ADS)
Sangwin, Christopher J.
2015-01-01
The goal of this paper is to examine single variable real inequalities that arise as tutorial problems and to examine the extent to which current computer algebra systems (CAS) can (1) automatically solve such problems and (2) determine whether students' own answers to such problems are correct. We review how inequalities arise in contemporary curricula. We consider the formal mathematical processes by which such inequalities are solved, and we consider the notation and syntax through which solutions are expressed. We review the extent to which current CAS can accurately solve these inequalities, and the form given to the solutions by the designers of this software. Finally, we discuss the functionality needed to deal with students' answers, i.e. to establish equivalence (or otherwise) of expressions representing unions of intervals. We find that while contemporary CAS accurately solve inequalities there is a wide variety of notation used.
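Both CAS tasks, solving a single-variable real inequality and checking a student's answer for equivalence as a union of intervals, can be sketched with SymPy (the inequality and the student answer below are invented examples):

```python
from sympy import Interval, Union, oo, solve_univariate_inequality, symbols

x = symbols("x", real=True)
# Task 1: solve the inequality, returning a set rather than a relation.
sol = solve_univariate_inequality(x**2 - x - 2 >= 0, x, relational=False)
print(sol)                            # Union(Interval(-oo, -1), Interval(2, oo))

# Task 2: decide whether a student's answer denotes the same set.
student = Union(Interval(-oo, -1), Interval(2, oo))
print(sol == student)                 # True: the answers are equivalent
```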
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Gregory H.
2003-08-06
In this paper we present a general iterative method for the solution of the Riemann problem for hyperbolic systems of PDEs. The method is based on the multiple shooting method for free boundary value problems. We demonstrate the method by solving one-dimensional Riemann problems for hyperelastic solid mechanics. Even for conditions representative of routine laboratory conditions and military ballistics, dramatic differences are seen between the exact and approximate Riemann solution. The greatest discrepancy arises from misallocation of energy between compressional and thermal modes by the approximate solver, resulting in nonphysical entropy and temperature estimates. Several pathological conditions arise in common practice, and modifications to the method to handle these are discussed. These include points where genuine nonlinearity is lost, degeneracies, and eigenvector deficiencies that occur upon melting.
Complexity seems to open a way towards a new Aristotelian-Thomistic ontology.
Strumia, Alberto
2007-01-01
Today's sciences all seem to converge towards very similar foundational questions. These questions, of both an epistemological and an ontological nature, seem to rediscover, in a new fashion, some of the most relevant topics of ancient Greek and Mediaeval philosophy of nature, logic and metaphysics: the problem of the relationship between the whole and its parts (non-reductionism), the paradoxes arising from the attempt to conceive the entity as a univocal concept (analogy and analogia entis), the problem of the mind-body relationship and that of an adequate cognitive theory (abstraction and the immaterial nature of the mind), the complexity of some physical, chemical and biological systems and the global properties arising from information (matter-form theory), etc. Medicine too is involved in some of these questions and cannot avoid taking them into special account.
A multidomain spectral collocation method for the Stokes problem
NASA Technical Reports Server (NTRS)
Landriani, G. Sacchi; Vandeven, H.
1989-01-01
A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.
Solving large sparse eigenvalue problems on supercomputers
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef
1988-01-01
An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely the Lanczos method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and a CRAY X-MP are reported. Possible parallel implementations are also discussed.
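For a modern illustration of the Lanczos approach, SciPy's ARPACK interface computes a few extreme eigenpairs of a large sparse symmetric matrix while touching the matrix only through matrix-vector products (the test matrix below is synthetic):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh

n = 100_000
A = sp.diags(np.arange(1, n + 1.0), format="csr")  # sparse diagonal test matrix

vals, vecs = eigsh(A, k=5, which="LA")             # 5 largest eigenvalues (Lanczos)
print(vals)                                        # approx. [n-4, ..., n]

# The LinearOperator form makes the "matrix-vector products only" point explicit:
op = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=float)
vals2, _ = eigsh(op, k=5, which="LA")
```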
Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python
Laura, Jason R.; Rey, Sergio J.
2017-01-01
Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
Support Vector Machines for Hyperspectral Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony; Cromp, R. F.
1998-01-01
The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain 96% and 87% correct classification on a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application, this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to the classification of an agriculture scene.
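The no-feature-selection point is easy to illustrate on synthetic stand-in data with a few hundred channels (this is not the hyperspectral scene used in the paper):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 200 "spectral channels", 16 classes, no dimensionality-reduction step.
X, y = make_classification(n_samples=2000, n_features=200, n_informative=50,
                           n_classes=16, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)  # one-vs-one multiclass SVM
print(clf.score(Xte, yte))                            # accuracy on held-out samples
```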
Chimpanzee adenoviral vectors as vaccines for outbreak pathogens
2017-01-01
ABSTRACT The 2014–15 Ebola outbreak in West Africa highlighted the potential for large disease outbreaks caused by emerging pathogens and has generated considerable focus on preparedness for future epidemics. Here we discuss drivers, strategies and practical considerations for developing vaccines against outbreak pathogens. Chimpanzee adenoviral (ChAd) vectors have been developed as vaccine candidates for multiple infectious diseases and prostate cancer. ChAd vectors are safe and induce antigen-specific cellular and humoral immunity in all age groups, as well as circumventing the problem of pre-existing immunity encountered with human Ad vectors. For these reasons, such viral vectors provide an attractive platform for stockpiling vaccines for emergency deployment in response to a threatened outbreak of an emerging pathogen. Work is already underway to develop vaccines against a number of other outbreak pathogens and we will also review progress on these approaches here, particularly for Lassa fever, Nipah and MERS. PMID:29083948
Chemical ecology of animal and human pathogen vectors in a changing global climate.
Pickett, John A; Birkett, Michael A; Dewhirst, Sarah Y; Logan, James G; Omolo, Maurice O; Torto, Baldwyn; Pelletier, Julien; Syed, Zainulabeuddin; Leal, Walter S
2010-01-01
Infectious diseases affecting livestock and human health that involve vector-borne pathogens are a global problem, unrestricted by borders or boundaries, which may be exacerbated by a changing global climate. Thus, the availability of effective tools for the control of pathogen vectors is of the utmost importance. The aim of this article is to review, selectively, current knowledge of the chemical ecology of pathogen vectors that affect livestock and human health in the developed and developing world, based on keynote lectures presented in a symposium on "The Chemical Ecology of Disease Vectors" at the 25th Annual ISCE meeting in Neuchatel, Switzerland. The focus is on the deployment of semiochemicals for monitoring and control strategies, and we briefly discuss the directions along which such research should proceed, bearing in mind the environmental challenges associated with climate change that we will face during the 21st century.
Ghosh, Srikant; Nagar, Gaurav
2014-12-01
Ticks are ranked second only to mosquitoes as vectors of zoonotic diseases. The diseases spread by ticks are a major constraint on animal productivity and cause morbidity and mortality in both animals and humans. A number of tick species have long been recognised as vectors of lethal pathogens, viz. Crimean-Congo haemorrhagic fever virus (CCHFV), Kyasanur forest disease virus (KFDV), Babesia spp., Theileria, Rickettsia conorii, Anaplasma marginale, etc., and the damage they cause is well recognised. There is a need to reassess the renewed threat posed by tick vectors and to prioritize the tick control research programme. This review focuses on the major tick-borne human and animal diseases in India and on progress in vector control research, with emphasis on acaricide resistance, tick vaccines and the development of potential phytoacaricides as an integral part of an integrated tick control programme.
Current status of Plasmodium knowlesi vectors: a public health concern?
Vythilingam, I; Wong, M L; Wan-Yussof, W S
2018-01-01
Plasmodium knowlesi, a simian malaria parasite, is currently affecting humans in Southeast Asia. Malaysia has reported the largest number of cases, and P. knowlesi is the predominant species occurring in humans. The vectors of P. knowlesi belong to the Leucosphyrus group of Anopheles mosquitoes, generally described as forest-dwelling mosquitoes. With deforestation and changes in land use, some species have become predominant in farms and villages. However, knowledge of the distribution of these vectors in the country is sparse. From a public health point of view it is important to know the vectors, so that risk factors for knowlesi malaria can be identified and control measures instituted where possible. Here, we review what is known about the knowlesi malaria vectors and identify the gaps in knowledge, so that future studies can concentrate on this paucity of data in order to address this zoonotic problem.
Evolution of Lamb Vector as a Vortex Breaking into Turbulence.
NASA Astrophysics Data System (ADS)
Wu, J. Z.; Lu, X. Y.
1996-11-01
In an incompressible flow, either laminar or turbulent, the Lamb vector is solely responsible for nonlinear interactions. While its longitudinal part is balanced by the stagnation enthalpy, its transverse part is the unique source (an external forcing in spectral space) that causes the flow to evolve. Moreover, in Reynolds-averaged flows the turbulent force can be derived exclusively from the Lamb vector instead of the full Reynolds stress tensor. Therefore, studying the evolution of the Lamb vector itself (both its longitudinal and transverse parts) is of great interest. We have numerically examined this problem, taking the nonlinear destabilization of a viscous vortex as an example. In the later stage of this evolution we introduced a forcing to keep a statistically steady state, and observed the behavior of the Lamb vector in the resulting fine-scale turbulence. The result is presented in both physical and spectral spaces.
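In two dimensions the vorticity is a scalar ω = ∂v/∂x − ∂u/∂y and the Lamb vector ω × u reduces to ω(−v, u); a sketch of computing it on a uniform grid, using a Taylor-Green test vortex rather than the simulation described above:

```python
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n)
X, Y = np.meshgrid(x, x, indexing="ij")        # axis 0 is x, axis 1 is y
u = np.sin(X) * np.cos(Y)                      # Taylor-Green velocity field
v = -np.cos(X) * np.sin(Y)                     # (divergence-free by construction)

dx = x[1] - x[0]
omega = np.gradient(v, dx, axis=0) - np.gradient(u, dx, axis=1)  # z-vorticity

lamb_x, lamb_y = -omega * v, omega * u         # Lamb vector = omega x u
print(float(np.abs(lamb_x).max()), float(np.abs(lamb_y).max()))
```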
Yang, V W; Marks, J A; Davis, B P; Jeffries, T W
1994-01-01
This paper describes the first high-efficiency transformation system for the xylose-fermenting yeast Pichia stipitis. The system includes integrating and autonomously replicating plasmids based on the gene for orotidine-5'-phosphate decarboxylase (URA3) and an autonomous replicating sequence (ARS) element (ARS2) isolated from P. stipitis CBS 6054. Ura- auxotrophs were obtained by selecting for resistance to 5-fluoroorotic acid and were identified as ura3 mutants by transformation with P. stipitis URA3. P. stipitis URA3 was cloned by its homology to Saccharomyces cerevisiae URA3, with which it is 69% identical in the coding region. P. stipitis ARS elements were cloned functionally through plasmid rescue. These sequences confer autonomous replication when cloned into vectors bearing the P. stipitis URA3 gene. P. stipitis ARS2 has features similar to those of the consensus ARS of S. cerevisiae and other ARS elements. Circular plasmids bearing the P. stipitis URA3 gene with various amounts of flanking sequences produced 600 to 8,600 Ura+ transformants per microgram of DNA by electroporation. Most transformants obtained with circular vectors arose without integration of vector sequences. One vector yielded 5,200 to 12,500 Ura+ transformants per microgram of DNA after it was linearized at various restriction enzyme sites within the P. stipitis URA3 insert. Transformants arising from linearized vectors produced stable integrants, and integration events were site specific for the genomic ura3 in 20% of the transformants examined. Plasmids bearing the P. stipitis URA3 gene and ARS2 element produced more than 30,000 transformants per microgram of plasmid DNA. Autonomously replicating plasmids were stable for at least 50 generations in selection medium and were present at an average of 10 copies per nucleus. PMID:7811063
Towards a resource-based habitat approach for spatial modelling of vector-borne disease risks.
Hartemink, Nienke; Vanwambeke, Sophie O; Purse, Bethan V; Gilbert, Marius; Van Dyck, Hans
2015-11-01
Given the veterinary and public health impact of vector-borne diseases, there is a clear need to assess the suitability of landscapes for the emergence and spread of these diseases. Current approaches for predicting disease risks neglect key features of the landscape as components of the functional habitat of vectors or hosts, and hence of the pathogen. Empirical-statistical methods do not explicitly incorporate biological mechanisms, whereas current mechanistic models are rarely spatially explicit; both methods ignore the way animals use the landscape (i.e. movement ecology). We argue that applying a functional concept for habitat, i.e. the resource-based habitat concept (RBHC), can solve these issues. The RBHC offers a framework to identify systematically the different ecological resources that are necessary for the completion of the transmission cycle and to relate these resources to (combinations of) landscape features and other environmental factors. The potential of the RBHC as a framework for identifying suitable habitats for vector-borne pathogens is explored and illustrated with the case of bluetongue virus, a midge-transmitted virus affecting ruminants. The concept facilitates the study of functional habitats of the interacting species (vectors as well as hosts) and provides new insight into spatial and temporal variation in transmission opportunities and exposure that ultimately determine disease risks. It may help to identify knowledge gaps and control options arising from changes in the spatial configuration of key resources across the landscape. The RBHC framework may act as a bridge between existing mechanistic and statistical modelling approaches. © 2014 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.
Hydrologic variability and the dynamics of West Nile virus transmission
NASA Astrophysics Data System (ADS)
Shaman, J. L.
2011-12-01
West Nile virus (WNV) first emerged in North America in New York City during 1999 and since that time has spread throughout the continent and settled into a pattern of local endemicity in which outbreaks of variable size develop in some years but not others. Predicting where and when these outbreaks will develop is an issue of considerable public health importance. Spillover transmission of WNV to humans typically occurs when infection rates among vector mosquitoes are elevated. Mosquito infection rates are not constant through time but instead increase when newly emergent mosquitoes can more readily acquire WNV by blood-meal feeding on available, infected animal hosts. Such an increase of vector mosquito infection rates is termed amplification and is facilitated for WNV by intense zoonotic transmission of the virus among vector mosquitoes and avian hosts. Theory, observation and model simulations indicate that amplification is favored when mosquito breeding habitats and bird nesting and roosting habitats overlap. Both vector mosquitoes and vertebrate hosts depend on water resources; mosquitoes are critically dependent on the availability of standing water, as the first three stages of the mosquito life cycle (egg, larva, pupa) are aquatic. Here it is shown that hydrologic variability often determines where and when vector mosquitoes and avian hosts congregate together, and when the amplification of WNV is more likely. Measures of land surface wetness and pooling, from ground observation, satellite observation, or numerical modeling, can provide reliable estimates of where and when WNV transmission hotspots will arise. Examples of this linkage between hydrology and WNV activity are given for Florida, Colorado and New York, and an operational system for monitoring and forecasting WNV risk in space and time is presented for Florida.
Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan
2017-04-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.
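A toy example of a flux polyhedron and an FBA-style linear program over it may make the geometry concrete (the three-reaction network and all numbers are invented; enumerating EFMs/EFVs themselves requires specialized tools):

```python
import numpy as np
from scipy.optimize import linprog

# One internal metabolite A; columns: v1 uptake -> A, v2 A -> product, v3 A -> bypass.
S = np.array([[1.0, -1.0, -1.0]])
bounds = [(0.0, 10.0), (0.0, 8.0), (0.0, 2.0)]  # irreversibility + rate bounds
# The inhomogeneous upper bounds turn the flux cone {v : Sv = 0, v >= 0}
# into a flux polyhedron, the setting in which EFVs generalize EFMs.

c = [0.0, -1.0, 0.0]                            # maximize v2 (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
print(res.x)                                    # an optimal vertex of the polyhedron
```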
Primer Vector Optimization: Survey of Theory, New Analysis and Applications
NASA Technical Reports Server (NTRS)
Guzman, J. J.; Mailhe, L. M.; Schiff, C.; Hughes, S. P.; Folta, D. C.
2002-01-01
In this paper, a summary of primer vector theory is presented. The applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. For example, since the Calculus of Variations is based on "small" variations, singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ small variations. Two examples, the initialization of an orbit and a line-of-apsides rotation, are presented. Recommendations, future work, and the possible addition of other optimization techniques are also discussed.
The bee's map of the e-vector pattern in the sky.
Rossel, S; Wehner, R
1982-07-01
It has long been known that bees can use the pattern of polarized light in the sky as a compass cue even if they can see only a small part of the whole pattern. How they solve this problem has remained enigmatic. Here we show that the bees rely on a generalized celestial map that is used invariably throughout the day. We reconstruct this map by analyzing the navigation errors made by bees to which single e-vectors are displayed. In addition, we demonstrate how the bee's celestial map can be derived from the e-vector patterns in the sky.
Ghost circles in lattice Aubry-Mather theory
NASA Astrophysics Data System (ADS)
Mramor, Blaz; Rink, Bob
Monotone lattice recurrence relations, such as the Frenkel-Kontorova lattice, arise in Hamiltonian lattice mechanics, as models for ferromagnetism and as discretizations of elliptic PDEs. Mathematically, they are a multi-dimensional counterpart of monotone twist maps. Such recurrence relations often admit a variational structure, so that the solutions x: Z^d → R are the stationary points of a formal action function W(x). Given any rotation vector ω ∈ R^d, classical Aubry-Mather theory establishes the existence of a large collection of solutions of ∇W(x) = 0 of rotation vector ω. For irrational ω, this is the well-known Aubry-Mather set. It consists of global minimizers and it may have gaps. In this paper, we study the parabolic gradient flow dx/dt = -∇W(x) and we prove that every Aubry-Mather set can be interpolated by a continuous gradient-flow-invariant family, the so-called 'ghost circle'. The existence of these ghost circles is known in dimension d = 1, for rational rotation vectors and Morse action functions. The main technical result of this paper is therefore a compactness theorem for lattice ghost circles, based on a parabolic Harnack inequality for the gradient flow. This implies the existence of lattice ghost circles for arbitrary rotation vectors and arbitrary actions. As a consequence, we can give a simple proof of the fact that when an Aubry-Mather set has a gap, this gap must be filled with minimizers or contain a non-minimizing solution.
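A small numerical sketch of this gradient flow for a finite Frenkel-Kontorova chain with clamped ends (parameters invented; the paper concerns the infinite lattice):

```python
import numpy as np

# W(x) = sum_i (x_{i+1} - x_i)^2 / 2 + (K / (2*pi)^2) * (1 - cos(2*pi*x_i))
K, n, dt = 0.5, 64, 0.05
rng = np.random.default_rng(0)
x = 0.3 * np.arange(n) + 0.05 * rng.normal(size=n)   # rotation number ~ 0.3

def grad_W(x):
    g = np.zeros_like(x)
    g[1:-1] = (2 * x[1:-1] - x[:-2] - x[2:]
               + (K / (2 * np.pi)) * np.sin(2 * np.pi * x[1:-1]))
    return g                                         # endpoints stay clamped

for _ in range(5000):                                # forward-Euler gradient flow
    x -= dt * grad_W(x)
print(np.abs(grad_W(x)).max())                       # ~0: a stationary configuration
```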
ERIC Educational Resources Information Center
Smith, Nick L.
2008-01-01
In contrast with nonindigenous workers, to what extent do unique ethical problems arise when indigenous field-workers participate in field studies? Three aspects of study design and operation are considered: data integrity issues, risk issues, and protection issues. Although many of the data quality issues that arise with the use of indigenous…
ERIC Educational Resources Information Center
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
Multi Objective Controller Design for Linear System via Optimal Interpolation
NASA Technical Reports Server (NTRS)
Ozbay, Hitay
1996-01-01
We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady-state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector valued optimization problem. The solution of this problem is obtained by transforming it to a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s))Q̄(s) for a prescribed polynomial v(s). Q̄(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of Q̄(s) are fixed. Obeying these constraints, Q̄(s) is now to be 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.
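A sketch of the scalarization step alone: for convex objectives, minimizing a positively weighted sum yields a Pareto-optimal point, reducing the vector-valued problem to a scalar convex one. The two objectives below are invented stand-ins, not the report's transient measures.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda q: (q[0] - 1.0) ** 2 + q[1] ** 2   # stand-in objective 1 (convex)
f2 = lambda q: q[0] ** 2 + (q[1] - 1.0) ** 2   # stand-in objective 2 (convex)

w = 0.3                                        # sweeping w in (0,1) traces the Pareto front
res = minimize(lambda q: w * f1(q) + (1.0 - w) * f2(q), x0=np.zeros(2))
print(res.x)                                   # a Pareto-optimal parameter choice
```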
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
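A toy LP relaxation of such a problem (three blocks, two periods, invented numbers; the industrial instances discussed above have millions of variables and constraints):

```python
import numpy as np
from scipy.optimize import linprog

profit = np.array([3.0, 2.0, 5.0])       # value per block
disc = np.array([1.0, 0.9])              # per-period discount factor
c = -(np.outer(profit, disc)).ravel()    # maximize discounted value (linprog minimizes)

def idx(j, t):                           # x[j, t] = fraction of block j taken in period t
    return 2 * j + t

A_ub, b_ub = [], []
for t in range(2):                       # capacity: at most 1.5 blocks per period
    row = np.zeros(6); row[[idx(0, t), idx(1, t), idx(2, t)]] = 1.0
    A_ub.append(row); b_ub.append(1.5)
for pred in (0, 1):                      # precedence: block 2 after blocks 0 and 1
    for t in range(2):                   # (cumulative extraction constraint)
        row = np.zeros(6)
        for s in range(t + 1):
            row[idx(2, s)] += 1.0; row[idx(pred, s)] -= 1.0
        A_ub.append(row); b_ub.append(0.0)
for j in range(3):                       # each block extracted at most once
    row = np.zeros(6); row[[idx(j, 0), idx(j, 1)]] = 1.0
    A_ub.append(row); b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * 6)
print(res.x.reshape(3, 2))               # fractional LP-relaxation schedule
```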
NASA Aviation Safety Reporting System
NASA Technical Reports Server (NTRS)
1980-01-01
Problems in the briefing of relief controllers by air traffic controllers are discussed, including problems that arise when controllers change duty positions. Altimeter reading and setting errors as factors in aviation safety are discussed, including problems associated with altitude-indicating instruments. A sample of reports from pilots and controllers is included, covering the topics of ATIS broadcasts and clearance readback problems. A selection of Alert Bulletins, with their responses, is included.
System balance analysis for vector computers
NASA Technical Reports Server (NTRS)
Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.
1975-01-01
The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.
Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics
2016-09-15
… Algorithm; GPS: Global Positioning System; HOUF: Higher Order Unscented Filter; IC: initial conditions; IMM: Interacting Multiple Model; IMU: Inertial Measurement Unit. … Sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor … parameters. A single vector measurement provides two independent parameters, as a unit vector constraint removes a degree of freedom, making the problem underdetermined.
The FKMM-invariant in low dimension
NASA Astrophysics Data System (ADS)
De Nittis, Giuseppe; Gomi, Kiyonori
2018-05-01
In this paper, we investigate the problem of the cohomological classification of "Quaternionic" vector bundles in low dimension (d ≤ 3). We show that there exists a characteristic class κ, called the FKMM-invariant, which takes values in the relative equivariant Borel cohomology and completely classifies "Quaternionic" vector bundles in low dimension. The main subject of the paper concerns a discussion about the surjectivity of κ.
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
2017-11-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
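A minimal sketch of the database-plus-regression idea, under assumed kernel, input distribution, and constraints (nonnegativity and unit normalization stand in for whatever constraints the physical problem imposes):

```python
# Solve the stable forward problem many times, fit a ridge regression from
# outputs back to inputs, then project predictions onto a constraint set.
import numpy as np

rng = np.random.default_rng(0)
m, d, n_train = 60, 40, 5000
s = np.linspace(0, 1, m)[:, None]
t = np.linspace(0, 1, d)[None, :]
K = np.exp(-(s - t) ** 2 / 0.01)          # smooth, ill-conditioned kernel

def random_input():                        # smooth nonnegative "spectra"
    c = np.linspace(0, 1, d)
    x = sum(rng.random() *
            np.exp(-(c - rng.random()) ** 2 / (0.01 + 0.05 * rng.random()))
            for _ in range(3))
    return x / x.sum()

X = np.stack([random_input() for _ in range(n_train)])   # database inputs
Y = X @ K.T + 1e-4 * rng.standard_normal((n_train, m))   # noisy outputs

lam = 1e-3                                 # ridge regression: outputs -> inputs
W = np.linalg.solve(Y.T @ Y + lam * np.eye(m), Y.T @ X)

x_true = random_input()                    # previously unseen test case
x_hat = (x_true @ K.T) @ W                 # regress from its forward output
x_hat = np.clip(x_hat, 0, None)            # constraint projection:
x_hat /= x_hat.sum()                       # nonnegative, unit sum (assumed)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The ridge penalty plays the role of the "natural regularization" the abstract mentions: the regression never inverts the ill-conditioned kernel directly.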
How Seductive Are Decorative Elements in Learning Materials?
ERIC Educational Resources Information Center
Rey, Gunter Daniel
2012-01-01
The seductive detail effect arises when people learn more deeply from a multimedia presentation in which interesting but irrelevant adjuncts are excluded. However, previous studies of this effect are rather inconclusive and contained various methodological problems. The present experiment attempted to overcome these methodological problems. Undergraduate…
Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions
ERIC Educational Resources Information Center
Pokropek, Artur
2015-01-01
This article combines statistical and applied research perspectives to show problems that might arise when measurement error is ignored in multilevel compositional effects analysis. The article focuses on data where the independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…
Using Grid Cells for Navigation.
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-08-05
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation.
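A one-dimensional toy version of the decoding problem: each grid module observes the displacement only modulo its spatial period, and a search over candidate displacements selects the value consistent with all modules, in the spirit of the Fourier-shift-based solution described above. The periods, noise level, and search range are assumptions for illustration.

```python
# Decode a displacement from wrapped grid-cell phase differences (1-D toy).
import numpy as np

periods = np.array([0.3, 0.42, 0.59, 0.83])   # grid module scales (meters)
true_disp = 2.17                              # start -> goal displacement

# Phase difference seen by each module, wrapped into [0, period), plus noise.
rng = np.random.default_rng(1)
phase_diff = np.mod(true_disp, periods) + 0.005 * rng.standard_normal(4)

candidates = np.linspace(0, 5, 50001)         # hypothesized displacements
# Score each candidate by phase agreement with every module; the cosine
# comparison of phases is the Fourier-flavored part of the computation.
score = sum(np.cos(2 * np.pi * (candidates - pd) / p)
            for p, pd in zip(periods, phase_diff))
print("decoded displacement:", candidates[np.argmax(score)])
```

Because the module periods are incommensurate, the combinatorial code stays unambiguous over a range far exceeding the largest single grid scale, which is exactly the regime the paper addresses.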
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
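The "overall finite difference with fixed approximation vectors" idea can be sketched on a toy spring-mass chain: the nominal design's vibration modes form a fixed reduction basis that is reused when the perturbed designs are reanalyzed. The model, load, and design parameter below are assumptions for illustration.

```python
# Finite-difference transient sensitivity with a fixed modal reduction basis.
import numpy as np
from scipy.linalg import eigh

def K_of(p, n=20):
    k = np.full(n + 1, 1.0)
    k[0] = p                                  # design parameter: one spring
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = k[i] + k[i + 1]
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return K

n, p0, h, t = 20, 1.0, 1e-4, 2.5
M = np.eye(n)
f = np.zeros(n)
f[-1] = 1.0                                   # step load at the last node

_, Phi = eigh(K_of(p0), M)                    # nominal-design modes
Phi = Phi[:, :4]                              # fixed 4-mode reduction basis

def tip_disp(p):
    Kr = Phi.T @ K_of(p) @ Phi                # reduced perturbed stiffness
    lam, Q = eigh(Kr, Phi.T @ M @ Phi)
    fq = Q.T @ (Phi.T @ f)
    q = (fq / lam) * (1 - np.cos(np.sqrt(lam) * t))  # exact undamped step response
    return (Phi @ (Q @ q))[-1]

sens = (tip_disp(p0 + h) - tip_disp(p0 - h)) / (2 * h)
print("tip displacement:", tip_disp(p0), " d(tip)/dp:", sens)
```

Reusing the nominal modes avoids re-solving the full eigenproblem for every perturbed design, which is the computational point the abstract makes.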
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The multi-robot task allocation problem is to assign tasks among robots so as to minimize the overall processing time of the tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the conventional artificial fish is extended to a two-dimensional artificial fish: each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer, 181.6 seconds, was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was on the Cray Y-MP using an average of 3.63 processors.
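A stripped-down Lanczos loop makes the three kernels named above visible. The test matrix, shift-invert formulation, and iteration count are illustrative assumptions, and no reorthogonalization is performed, which a production solver would add.

```python
# Minimal shift-invert Lanczos exposing the three computational kernels:
# (1) one-time matrix factorization, (2) repeated forward/backward solves,
# (3) vector operations / matrix-vector-like work.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 500
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(K)                       # kernel 1: matrix factorization (once)
q = np.random.default_rng(0).standard_normal(n)
q /= np.linalg.norm(q)
Q, alphas, betas = [q], [], []

for j in range(30):                # Lanczos on K^{-1} (shift-invert at 0)
    w = lu.solve(Q[-1])            # kernel 2: forward/backward substitution
    if len(Q) > 1:
        w -= betas[-1] * Q[-2]
    a = Q[-1] @ w                  # kernel 3: dense vector operations
    w -= a * Q[-1]
    b = np.linalg.norm(w)
    alphas.append(a); betas.append(b)
    Q.append(w / b)

T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
theta = np.linalg.eigvalsh(T)      # Ritz values approximate eigs of K^{-1}
print("smallest eigenvalues of K ~", np.sort(1.0 / theta)[:3])
```

On vector and parallel machines, the factorization and the repeated solves dominate the run time, which is why the report optimizes exactly these kernels.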
Optimal ballistically captured Earth-Moon transfers
NASA Astrophysics Data System (ADS)
Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.
2012-07-01
The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is then modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which could exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
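Multiplication by digital convolution can be illustrated directly: encode the operands as digit vectors, convolve, then propagate carries, so that each elementary product stays small enough for a low-dynamic-range device. The base-10 encoding here is for readability; a real processor would likely use a smaller radix.

```python
# Multiplication via digit-level convolution, as referenced above.
import numpy as np

def to_digits(x, base=10):            # least-significant digit first
    out = []
    while x:
        out.append(x % base)
        x //= base
    return np.array(out or [0])

def from_digits(d, base=10):          # propagate carries, rebuild the integer
    total, carry = 0, 0
    for i, v in enumerate(d):
        v += carry
        total += (v % base) * base ** i
        carry = v // base
    return total + carry * base ** len(d)

a, b = 4096, 1234
conv = np.convolve(to_digits(a), to_digits(b))   # digit-level products
print(from_digits(conv), "==", a * b)
```

The convolution only ever forms small digit products and sums, which is what lets a 9-12 bit analog device contribute to a 32-bit-accurate result.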
Optimal Low Energy Earth-Moon Transfers
NASA Technical Reports Server (NTRS)
Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.
2010-01-01
The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.
NASA Astrophysics Data System (ADS)
Shorikov, A. F.
2016-12-01
In this article we consider a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these objects are described by corresponding linear or nonlinear discrete-time recurrent vector relations, and the control system consists of two levels: a basic level (control level I), which is the dominating level, and an auxiliary level (control level II), which is the subordinate level. The two levels have different criteria of functioning and are united by information and control connections defined in advance. In this article we study the problem of optimizing the guaranteed result for program control of the final state of a regional social and economic system in the presence of risk vectors. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final states of this system with incomplete information, and a general scheme for its solution.
NASA Astrophysics Data System (ADS)
Kanaun, S.; Markov, A.
2017-06-01
An efficient numerical method is developed for the solution of static elasticity problems for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions). A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shapes is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used. The method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz structure, and the Fast Fourier Transform technique can be used to calculate matrix-vector products with such matrices.
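The Toeplitz/FFT remark corresponds to the classic circulant-embedding trick, sketched here on an assumed random test matrix:

```python
# Toeplitz matrix-vector product in O(n log n) via circulant embedding.
import numpy as np
from scipy.linalg import toeplitz     # dense reference, for verification only

def toeplitz_matvec(col, row, x):
    """y = T x for the Toeplitz matrix with first column col, first row row."""
    n = len(x)
    # First column of the (2n)-periodic circulant embedding of T.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real

rng = np.random.default_rng(0)
n = 256
col, row = rng.standard_normal(n), rng.standard_normal(n)
row[0] = col[0]                       # consistency on the diagonal
x = rng.standard_normal(n)

print(np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x))
```

For the regular node grids the abstract mentions, this replaces an O(n²) dense product with FFTs, which is what makes large discretized systems tractable.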
Machine Learning and Inverse Problem in Geodynamics
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.
2017-12-01
During the past few decades, numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solving. However, in recent years the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high-pressure mineral physics, geochemistry, and petrology, where the number of mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of these parameters that are incorporated in the numerical models as input parameters are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with the emphasis on SML techniques for solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of stagnation depends on the degree of the negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered measures of the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms, we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used to characterize mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for the estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.
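A hypothetical sketch of this workflow: synthetic "convection run" features (stagnated slab and plume volume fractions) with known anomaly-class labels train an SVM that classifies unseen runs. The data generator below is an assumption standing in for the numerical convection models.

```python
# SVM classification of mantle density-anomaly classes from stagnation
# features (synthetic stand-in data; not the study's simulation outputs).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 600)           # 3 assumed density-anomaly classes
# Pretend the stagnated volume fractions grow with the anomaly magnitude.
X = np.column_stack([
    0.10 + 0.15 * labels + 0.03 * rng.standard_normal(600),  # slab fraction
    0.05 + 0.10 * labels + 0.03 * rng.standard_normal(600),  # plume fraction
])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The inverse-problem flavor comes from the direction of the mapping: the simulator goes from anomaly to stagnation, while the trained classifier goes from observed stagnation back to the anomaly class.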
Natural Language Processing Based Instrument for Classification of Free Text Medical Records
2016-01-01
According to the Ministry of Labor, Health and Social Affairs of Georgia, a new health management system is to be introduced in the near future. In this context the problem arises of structuring and classifying documents containing the entire history of medical services provided. The present work introduces an instrument for the classification of medical records in the Georgian language. It is the first attempt at such classification of Georgian-language medical records. In total, 24,855 examination records were studied. The documents were classified into three main groups (ultrasonography, endoscopy, and X-ray) and 13 subgroups using two well-known methods: Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). The results obtained demonstrated that both machine learning methods performed successfully, with SVM performing slightly better. In the process of classification a "shrink" method, based on feature selection, was introduced and applied. At the first stage of classification the results of the "shrink" case were better; however, at the second stage of classification into subclasses, 23% of all documents could not be linked to a single definite subclass (liver or biliary system) due to common features characterizing these subclasses. The overall results of the study were successful. PMID:27668260
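A toy reconstruction of the SVM-versus-KNN comparison on TF-IDF features; the miniature English corpus below merely stands in for the Georgian records, which are not reproduced here.

```python
# SVM vs. KNN text classification on TF-IDF features (toy stand-in corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier

docs = ["liver ultrasound shows normal echogenicity",
        "gastroscopy reveals mild gastritis",
        "chest x-ray shows clear lung fields",
        "abdominal ultrasound of the biliary system",
        "endoscopy of the duodenum without lesions",
        "x-ray of the right wrist after trauma"]
labels = ["ultrasonography", "endoscopy", "x-ray",
          "ultrasonography", "endoscopy", "x-ray"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
query = vec.transform(["ultrasound of the liver"])

for clf in (LinearSVC(), KNeighborsClassifier(n_neighbors=3)):
    print(type(clf).__name__, clf.fit(X, labels).predict(query))
```

The subclass ambiguity the study reports (liver versus biliary system) corresponds here to documents whose TF-IDF features overlap across classes, which no choice of classifier can fully resolve.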
Biometric feature embedding using robust steganography technique
NASA Astrophysics Data System (ADS)
Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.
2013-05-01
This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects such as images, over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking, and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared with other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining face recognition accuracy.
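One plausible reading of the LSB-Witness idea (an interpretation, not the paper's exact specification): leave the LSB plane untouched and rewrite the second LSB plane to record, per pixel, whether the cover LSB already equals the secret bit; the receiver then recovers the secret from the two planes together.

```python
# LSB-Witness embedding sketch under the interpretation stated above.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = rng.integers(0, 2, size=cover.shape, dtype=np.uint8)

lsb = cover & 1
witness = (lsb == secret).astype(np.uint8)     # 1 where the LSB already matches
stego = (cover & 0xFD) | (witness << 1)        # overwrite the 2nd LSB only

# Receiver: secret = LSB where witness = 1, flipped LSB where witness = 0.
rec_lsb, rec_wit = stego & 1, (stego >> 1) & 1
recovered = np.where(rec_wit == 1, rec_lsb, 1 - rec_lsb).astype(np.uint8)
print("exact recovery:", np.array_equal(recovered, secret))
```

Because the LSB plane itself is never modified, first-order LSB statistics are preserved, which is consistent with the claimed resistance to PoV and RS steganalysis.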