Sample records for skew convolution semigroups

  1. Non-stationary blind deconvolution of medical ultrasound scans

    NASA Astrophysics Data System (ADS)

    Michailovich, Oleg V.

    2017-03-01

    In linear approximation, the formation of a radio-frequency (RF) ultrasound image can be described based on a standard convolution model in which the image is obtained as a result of convolution of the point spread function (PSF) of the ultrasound scanner in use with a tissue reflectivity function (TRF). Due to the band-limited nature of the PSF, the RF images can only be acquired at a finite spatial resolution, which is often insufficient for proper representation of the diagnostic information contained in the TRF. One particular way to alleviate this problem is by means of image deconvolution, which is usually performed in a "blind" mode, when both PSF and TRF are estimated at the same time. Despite its proven effectiveness, blind deconvolution (BD) still suffers from a number of drawbacks, the chief of which is its dependence on a stationary convolution model, which cannot account for the spatial variability of the PSF. As a result, virtually all existing BD algorithms are applied to localized segments of RF images. In this work, we introduce a novel method for non-stationary BD, which is capable of recovering the TRF concurrently with the spatially variable PSF. In particular, our approach is based on semigroup theory, which allows one to describe the effect of such a PSF in terms of the action of a properly defined linear semigroup. The approach leads to a tractable optimization problem, which can be solved using standard numerical methods. The effectiveness of the proposed solution is supported by experiments with in vivo ultrasound data.
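
    As a minimal illustration of the stationary convolution model that this abstract contrasts with, the sketch below convolves a toy tissue reflectivity function with a hypothetical band-limited PSF. The signals, values, and function name are invented for illustration and are not taken from the paper.

```python
# Sketch of the stationary convolution model: an RF line is the 1-D
# convolution of a (hypothetical) PSF with a TRF.

def convolve(signal, kernel):
    """Full discrete linear convolution of two sequences."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

trf = [0.0, 1.0, 0.0, 0.0, -0.5, 0.0]   # toy tissue reflectivity function
psf = [0.25, 0.5, 0.25]                 # toy band-limited point spread function
rf = convolve(trf, psf)                 # simulated RF line: blurred reflectors
```

    The non-stationary setting of the paper replaces the single fixed kernel with a spatially varying one, which is exactly what breaks this simple model.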

  2. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space ℋ of equivalence classes of complex-valued functions on H that are square integrable with respect to the shift-invariant measure λ. Using averaging of the shift operator in ℋ over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on ℋ, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  3. Ideal Theory in Semigroups Based on Intersectional Soft Sets

    PubMed Central

    Song, Seok Zun; Jun, Young Bae

    2014-01-01

    The notions of int-soft semigroups and int-soft left (resp., right) ideals are introduced, and several properties are investigated. Using these notions and the notion of an inclusive set, characterizations of subsemigroups and left (resp., right) ideals are considered. Using the notion of int-soft products, characterizations of int-soft semigroups and int-soft left (resp., right) ideals are discussed. We prove that the soft intersection of int-soft left (resp., right) ideals (resp., int-soft semigroups) is again an int-soft left (resp., right) ideal (resp., an int-soft semigroup). The concept of int-soft quasi-ideals is also introduced, and a characterization of a regular semigroup is discussed. PMID:25101310

  4. Stability of gradient semigroups under perturbations

    NASA Astrophysics Data System (ADS)

    Aragão-Costa, E. R.; Caraballo, T.; Carvalho, A. N.; Langa, J. A.

    2011-07-01

    In this paper we prove that gradient-like semigroups (in the sense of Carvalho and Langa (2009 J. Diff. Eqns 246 2646-68)) are gradient semigroups (possess a Lyapunov function). This is primarily done to provide conditions under which gradient semigroups, in a general metric space, are stable under perturbation, exploiting the known fact (see Carvalho and Langa (2009 J. Diff. Eqns 246 2646-68)) that gradient-like semigroups are stable under perturbation. The results presented here were motivated by the work carried out in Conley (1978 Isolated Invariant Sets and the Morse Index (CBMS Regional Conference Series in Mathematics vol 38) (Providence, RI: American Mathematical Society)) for groups in compact metric spaces (see also Rybakowski (1987 The Homotopy Index and Partial Differential Equations (Universitext) (Berlin: Springer)) for the Morse decomposition of an invariant set for a semigroup on a compact metric space).

  5. Topology-preserving quantum deformation with non-numerical parameter

    NASA Astrophysics Data System (ADS)

    Aukhadiev, Marat; Grigoryan, Suren; Lipacheva, Ekaterina

    2013-11-01

    We introduce a class of compact quantum semigroups, which we call semigroup deformations of compact Abelian groups. These objects arise from reduced semigroup C*-algebras, a generalization of the Toeplitz algebra. We study quantum subgroups, quantum projective spaces and quantum quotient groups for such objects, and show that the group is contained as a compact quantum subgroup in the deformation of itself. The connection with the notion of a weak Hopf algebra is described. We give a grading on the C*-algebra of the compact quantum semigroups constructed.

  6. Open Quantum Systems and Classical Trajectories

    NASA Astrophysics Data System (ADS)

    Rebolledo, Rolando

    2004-09-01

    A quantum Markov semigroup consists of a family 𝒯 = (𝒯_t)_{t ≥ 0} of normal, ω*-continuous, completely positive maps on a von Neumann algebra 𝔐 which preserve the unit and satisfy the semigroup property. This class of semigroups has been extensively used to represent open quantum systems. This article is aimed at studying the existence of a 𝒯-invariant abelian subalgebra 𝔄 of 𝔐. When this happens, the restriction of 𝒯_t to 𝔄 defines a classical Markov semigroup T = (T_t)_{t ≥ 0}, associated to a classical Markov process X = (X_t)_{t ≥ 0}. The structure (𝔄, T, X) unravels the quantum Markov semigroup 𝒯, providing a bridge between open quantum systems and classical stochastic processes.

  7. Quantitative recurrence for free semigroup actions

    NASA Astrophysics Data System (ADS)

    Carvalho, Maria; Rodrigues, Fagner B.; Varandas, Paulo

    2018-03-01

    We consider finitely generated free semigroup actions on a compact metric space and obtain quantitative information on Poincaré recurrence, average first return time and hitting frequency for the random orbits induced by the semigroup action. In addition, we relate the recurrence to balls with the rates of expansion of the semigroup generators and the topological entropy of the semigroup action. Finally, we establish a partial variational principle and prove an ergodic optimization result for this kind of dynamical action. MC has been financially supported by CMUP (UID/MAT/00144/2013), which is funded by FCT (Portugal) with national (MEC) and European structural funds (FEDER) under the partnership agreement PT2020. FR and PV were partially supported by BREUDS. PV has also benefited from a fellowship awarded by CNPq-Brazil and is grateful to the Faculty of Sciences of the University of Porto for the excellent research conditions.

  8. Continuity properties of the semi-group and its integral kernel in non-relativistic QED

    NASA Astrophysics Data System (ADS)

    Matte, Oliver

    2016-07-01

    Employing recent results on stochastic differential equations associated with the standard model of non-relativistic quantum electrodynamics by B. Güneysu, J. S. Møller, and the present author, we study the continuity of the corresponding semi-group between weighted vector-valued Lp-spaces, continuity properties of elements in the range of the semi-group, and the pointwise continuity of an operator-valued semi-group kernel. We further discuss the continuous dependence of the semi-group and its integral kernel on model parameters. All these results are obtained for Kato decomposable electrostatic potentials and the actual assumptions on the model are general enough to cover the Nelson model as well. As a corollary, we obtain some new pointwise exponential decay and continuity results on elements of low-energetic spectral subspaces of atoms or molecules that also take spin into account. In a simpler situation where spin is neglected, we explain how to verify the joint continuity of positive ground state eigenvectors with respect to spatial coordinates and model parameters. There are no smallness assumptions imposed on any model parameter.

  9. On quantum symmetries of compact metric spaces

    NASA Astrophysics Data System (ADS)

    Chirvasitu, Alexandru

    2015-08-01

    An action of a compact quantum group on a compact metric space (X , d) is (D)-isometric if the distance function is preserved by a diagonal action on X × X. In this study, we show that an isometric action in this sense has the following additional property: the corresponding action on the algebra of continuous functions on X by the convolution semigroup of probability measures on the quantum group contracts Lipschitz constants. In other words, it is isometric in another sense due to Li, Quaegebeur, and Sabbe, which partially answers a question posed by Goswami. We also introduce other possible notions of isometric quantum actions in terms of the Wasserstein p-distances between probability measures on X for p ≥ 1, which are used extensively in optimal transportation. Indeed, all of these definitions of quantum isometry belong to a hierarchy of implications, where the two described above lie at the extreme ends of the hierarchy. We conjecture that they are all equivalent.

  10. Comparison of Texture Analysis Techniques in Both Frequency and Spatial Domains for Cloud Feature Extraction

    DTIC Science & Technology

    1992-01-01

    ...entropy, energy, variance, skewness, and kurtosis. These parameters are then used as... ...object. It can also be applied to an image of a phenomenon... The co-occurrence matrix method is used in this study to derive texture values of entropy, homogeneity, energy (similar to the GLDV angular...) ...from working with the co-occurrence matrix method. Seven convolution sizes were chosen to derive the texture values of entropy, local homogeneity, and...
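
    The co-occurrence-matrix texture measures named in these snippet fragments can be sketched in a few lines. The image, pixel offset, gray-level count, and helper names (`glcm`, `entropy`) below are illustrative choices, not taken from the report.

```python
import math

def glcm(image, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(len(image)):
        for x in range(len(image[0])):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def entropy(p):
    """Texture entropy: -sum p*log(p) over nonzero co-occurrence entries."""
    return -sum(v * math.log(v) for row in p for v in row if v > 0)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
e = entropy(glcm(img))   # texture entropy of the toy 4-level image
```

    Homogeneity, energy, and the other statistics listed in the fragments are computed from the same normalized matrix with different per-entry weightings.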

  11. A Study of Strong Stability of Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cataltepe, Tayfun

    1989-01-01

    The strong stability of distributed systems is studied and the problem of characterizing strongly stable semigroups of operators associated with distributed systems is addressed. Main emphasis is on contractive systems. Three different approaches to the characterization of strongly stable contractive semigroups are developed. The first one is an operator-theoretic approach. Using the theory of dilations, it is shown that every strongly stable contractive semigroup is related to the left shift semigroup on an L^2 space. Then, a decomposition for the state space which identifies strongly stable and unstable states is introduced. Based on this decomposition, conditions for a contractive semigroup to be strongly stable are obtained. Finally, extensions of Lyapunov's equation for distributed parameter systems are investigated. Sufficient conditions for weak and strong stability of uniformly bounded semigroups are obtained by relaxing the equivalent norm condition on the right-hand side of the Lyapunov equation. These characterizations are then applied to the problem of feedback stabilization. First, it is shown via the state space decomposition that under certain conditions a contractive system (A,B) can be strongly stabilized by the feedback -B*. Then, application of the extensions of the Lyapunov equation results in sufficient conditions for weak, strong, and exponential stabilization of contractive systems by the feedback -B*. Finally, it is shown that for a contractive system dx/dt = Ax + Bu (where B is any bounded linear operator), there is a related linear quadratic regulator problem and a corresponding steady-state Riccati equation which always has a bounded nonnegative solution.

  12. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter lambda = -1 leads to the best results.
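
    A rough sketch of the setup described above: sample an Ex-Gaussian as the sum of a normal and an exponential variate, then apply a power transformation with lambda = -1 (a negative reciprocal, which preserves the ordering of values). The parameters (normal mean 300 ms, SD 20; exponential mean 100) and the moment-based skewness estimator are illustrative assumptions, not the paper's exact procedure.

```python
import random
import statistics

def skewness(xs):
    """Moment-based sample skewness."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

random.seed(0)
# Ex-Gaussian RTs: normal component plus exponential component (toy parameters).
rt = [random.gauss(300, 20) + random.expovariate(1 / 100) for _ in range(5000)]

# Power transformation with lambda = -1: the negative reciprocal.
transformed = [-1.0 / x for x in rt]
```

    On positively skewed samples like these, the reciprocal transformation pulls the long right tail in, reducing the magnitude of the skewness.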

  13. Analytic semigroups: Applications to inverse problems for flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rebnord, D. A.

    1990-01-01

    Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, Y.; Horwitz, L. P.; Eisenberg, E.

    We discuss the quantum Lax-Phillips theory of scattering and unstable systems. In this framework, the decay of an unstable system is described by a semigroup. The spectrum of the generator of the semigroup corresponds to the singularities of the Lax-Phillips S-matrix. In the case of discrete (complex) spectrum of the generator of the semigroup, associated with resonances, the decay law is exactly exponential. The states corresponding to these resonances (eigenfunctions of the generator of the semigroup) lie in the Lax-Phillips Hilbert space, and therefore all physical properties of the resonant states can be computed. We show that the Lax-Phillips S-matrix is unitarily related to the S-matrix of standard scattering theory by a unitary transformation parametrized by the spectral variable σ of the Lax-Phillips theory. Analytic continuation in σ has some of the properties of a method developed some time ago for application to dilation analytic potentials. We work out an illustrative example using a Lee-Friedrichs model for the underlying dynamical system.

  15. A semigroup approach to the strong ergodic theorem of the multistate stable population process.

    PubMed

    Inaba, H

    1988-01-01

    "In this paper we first formulate the dynamics of multistate stable population processes as a partial differential equation. Next, we rewrite this equation as an abstract differential equation in a Banach space, and solve it by using the theory of strongly continuous semigroups of bounded linear operators. Subsequently, we investigate the asymptotic behavior of this semigroup to show the strong ergodic theorem which states that there exists a stable distribution independent of the initial distribution. Finally, we introduce the dual problem in order to obtain a logical definition for the reproductive value and we discuss its applications." (SUMMARY IN FRE) excerpt

  16. Generalized Friedland's theorem for C0-semigroups

    NASA Astrophysics Data System (ADS)

    Cichon, Dariusz; Jung, Il Bong; Stochel, Jan

    2008-07-01

    Friedland's characterization of bounded normal operators is shown to hold for infinitesimal generators of C0-semigroups. New criteria for the normality of bounded operators are furnished in terms of the Hamburger moment problem. All this is achieved with the help of Ando's celebrated theorem on paranormal operators.

  17. Quantum Markov Semigroups with Unbounded Generator and Time Evolution of the Support Projection of a State

    NASA Astrophysics Data System (ADS)

    Gliouez, Souhir; Hachicha, Skander; Nasroui, Ikbel

    We characterize the support projection of a state evolving under the action of a quantum Markov semigroup with unbounded generator represented in the generalized GKSL form, and we prove a quantum version of the classical Lévy-Austin-Ornstein theorem.

  18. The interplay between group crossed products, semigroup crossed products and toeplitz algebras

    NASA Astrophysics Data System (ADS)

    Yusnitha, I.

    2018-05-01

    We realize group crossed products constructed by decomposition as semigroup crossed products, and connect them to the Toeplitz algebra of an ordered group quotient. This yields a preliminary description for the further study of the structure of Toeplitz algebras of finitely generated ordered groups.

  19. The damped wave equation with unbounded damping

    NASA Astrophysics Data System (ADS)

    Freitas, Pedro; Siegl, Petr; Tretter, Christiane

    2018-06-01

    We analyze new phenomena arising in linear damped wave equations on unbounded domains when the damping is allowed to become unbounded at infinity. We prove the generation of a contraction semigroup, study the relation between the spectra of the semigroup generator and the associated quadratic operator function, the convergence of non-real eigenvalues in the asymptotic regime of diverging damping on a subdomain, and we investigate the appearance of essential spectrum on the negative real axis. We further show that the presence of the latter prevents exponential estimates for the semigroup and turns out to be a robust effect that cannot be easily canceled by adding a positive potential. These analytic results are illustrated by examples.

  20. Feynman formulas for semigroups generated by an iterated Laplace operator

    NASA Astrophysics Data System (ADS)

    Buzinov, M. S.

    2017-04-01

    In the present paper, we find representations of a one-parameter semigroup generated by a finite sum of iterated Laplace operators and an additive perturbation (the potential). Such semigroups and the evolution equations corresponding to them find applications in the fields of physics, chemistry, biology, and pattern recognition. The representations mentioned above are obtained in the form of Feynman formulas, i.e., in the form of a limit of multiple integrals as the multiplicity tends to infinity. The term "Feynman formula" was proposed by Smolyanov. Smolyanov's approach uses Chernoff's theorems. A simple form of the representations thus obtained enables one to use them for numerical modeling of the dynamics of the evolution system as a method for the approximation of solutions of equations. The problems considered in this note can be treated using the approach suggested by Remizov (see also the monograph of Smolyanov and Shavgulidze on path integrals). The representations (of semigroups) obtained in this way are more complicated than those given by the Feynman formulas; however, it is possible to bypass some analytical difficulties.

  1. On Superstability of Semigroups

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1997-01-01

    This paper presents a brief report on superstable semigroups - abstract theory and some applications thereof. The notion of superstability is a strengthening of exponential stability and occurs in Timoshenko models of structures with self-straining material using pure (idealized) rate feedback. It is also relevant to the problem of Riesz bases of eigenfunctions of infinitesimal generators under perturbation.

  2. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter lambda = -1 leads to the best results. PMID:25709588

  3. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also make it possible to calculate these phase space Feynman path integrals and to connect them with functional integrals with respect to probability measures.

  4. On the membrane approximation in isothermal film casting

    NASA Astrophysics Data System (ADS)

    Hagen, Thomas

    2014-08-01

    In this work, a one-dimensional model for isothermal film casting is studied. Film casting is an important engineering process to manufacture thin films and sheets from a highly viscous polymer melt. The model equations account for variations in film width and film thickness, and arise from thinness and kinematic assumptions for the free liquid film. The first aspect of our study is a rigorous discussion of the existence and uniqueness of stationary solutions. This objective is approached via the argument principle, exploiting the homotopy invariance of a family of analytic functions. As our second objective, we analyze the linearization of the governing equations about stationary solutions. It is shown that solutions for the associated boundary-initial value problem are given by a strongly continuous semigroup of bounded linear operators. To reach this result, we cast the relevant Cauchy problem in a more accessible form. These transformed equations allow us insight into the regularity of the semigroup, thus yielding the validity of the spectral mapping theorem for the semigroup and the spectrally determined growth property.

  5. Semigroup theory and numerical approximation for equations in linear viscoelasticity

    NASA Technical Reports Server (NTRS)

    Fabiano, R. H.; Ito, K.

    1990-01-01

    A class of abstract integrodifferential equations used to model linear viscoelastic beams is investigated analytically, applying a Hilbert-space approach. The basic equation is rewritten as a Cauchy problem, and its well-posedness is demonstrated. Finite-dimensional subspaces of the state space and an estimate of the state operator are obtained; approximation schemes for the equations are constructed; and the convergence is proved using the Trotter-Kato theorem of linear semigroup theory. The actual convergence behavior of different approximations is demonstrated in numerical computations, and the results are presented in tables.

  6. Towards an Effective Theory of Reformulation. Part 1; Semantics

    NASA Technical Reports Server (NTRS)

    Benjamin, D. Paul

    1992-01-01

    This paper describes an investigation into the structure of representations of sets of actions, utilizing semigroup theory. The goals of this project are twofold: to shed light on the relationship between tasks and representations, leading to a classification of tasks according to the representations they admit; and to develop techniques for automatically transforming representations so as to improve problem-solving performance. A method is demonstrated for automatically generating serial algorithms for representations whose actions form a finite group. This method is then extended to representations whose actions form a finite inverse semigroup.

  7. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    PubMed

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
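
    The conventional feature set that the CNN is compared against (mean, peak, slope, variance, skewness, kurtosis) can be sketched directly. The window values and the crude end-to-end slope estimate below are illustrative assumptions, not the paper's exact definitions.

```python
import statistics

def features(window):
    """Common hand-crafted features of one hemodynamic signal window."""
    n = len(window)
    m = statistics.fmean(window)
    s = statistics.pstdev(window)
    slope = (window[-1] - window[0]) / (n - 1)   # crude end-to-end slope
    skew = sum((x - m) ** 3 for x in window) / (n * s ** 3)
    kurt = sum((x - m) ** 4 for x in window) / (n * s ** 4)
    return {"mean": m, "peak": max(window), "slope": slope,
            "variance": s ** 2, "skewness": skew, "kurtosis": kurt}

f = features([0.0, 0.2, 0.5, 0.9, 1.0, 0.8, 0.4])
```

    A CNN-based pipeline replaces this manual step: the convolutional layers learn their own features from the raw windows before classification.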

  8. Analogues of Chernoff's theorem and the Lie-Trotter theorem

    NASA Astrophysics Data System (ADS)

    Neklyudov, Alexander Yu

    2009-10-01

    This paper is concerned with the abstract Cauchy problem $\dot x = \mathrm{A}x$, $x(0) = x_0 \in \mathscr{D}(\mathrm{A})$, where $\mathrm{A}$ is a densely defined linear operator on a Banach space $\mathbf{X}$. It is proved that a solution $x(\,\cdot\,)$ of this problem can be represented as the weak limit $\lim_{n\to\infty} \mathrm{F}(t/n)^n x_0$, where the function $\mathrm{F}\colon [0,\infty) \to \mathscr{L}(\mathbf{X})$ satisfies the equality $\mathrm{F}'(0)y = \mathrm{A}y$, $y \in \mathscr{D}(\mathrm{A})$, for a natural class of operators. As distinct from Chernoff's theorem, the existence of a global solution to the Cauchy problem is not assumed. Based on this result, necessary and sufficient conditions are found for the linear operator $\mathrm{C}$ to be closable and for its closure to be the generator of a $C_0$-semigroup. Also, we obtain new criteria for the sum of two generators of $C_0$-semigroups to be the generator of a $C_0$-semigroup and for the Lie-Trotter formula to hold. Bibliography: 13 titles.
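
    The Lie-Trotter formula mentioned above can be checked numerically in the bounded case: for matrices, (exp(tA/n) exp(tB/n))^n approaches exp(t(A+B)) as n grows. The 2x2 nilpotent generators below are hypothetical; this is a finite-dimensional sanity check, not the paper's unbounded setting.

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """Matrix exponential of a 2x2 matrix via its Taylor series."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        result = [[r + s for r, s in zip(rr, ss)] for rr, ss in zip(result, term)]
    return result

def scale(A, c):
    return [[c * x for x in row] for row in A]

# Hypothetical bounded generators (nilpotent, so their exponentials are exact).
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
t, n = 1.0, 1000

# One Trotter step exp(tA/n) exp(tB/n), composed n times.
step = mat_mul(mat_exp(scale(A, t / n)), mat_exp(scale(B, t / n)))
prod = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    prod = mat_mul(prod, step)

# Limit object: exp(t(A+B)); here A+B = [[0,1],[1,0]], with cosh/sinh entries.
target = mat_exp([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
analytic = math.cosh(t)
err = max(abs(prod[i][j] - target[i][j]) for i in range(2) for j in range(2))
```

    The error decays like O(1/n), consistent with the first-order splitting.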

  9. The Clifford Deformation of the Hermite Semigroup

    NASA Astrophysics Data System (ADS)

    De Bie, Hendrik; Örsted, Bent; Somberg, Petr; Souček, Vladimir

    2013-02-01

    This paper is a continuation of the paper [De Bie H., Örsted B., Somberg P., Souček V., Trans. Amer. Math. Soc. 364 (2012), 3875-3902], investigating a natural radial deformation of the Fourier transform in the setting of Clifford analysis. At the same time, it gives extensions of many results obtained in [Ben Saïd S., Kobayashi T., Örsted B., Compos. Math. 148 (2012), 1265-1336]. We establish the analogues of Bochner's formula and the Heisenberg uncertainty relation in the framework of the (holomorphic) Hermite semigroup, and also give a detailed analytic treatment of the series expansion of the associated integral transform.

  10. Area law violations and quantum phase transitions in modified Motzkin walk spin chains

    NASA Astrophysics Data System (ADS)

    Sugino, Fumihiko; Padmanabhan, Pramod

    2018-01-01

    Area law violations for entanglement entropy in the form of a square root have recently been studied for one-dimensional frustration-free quantum systems based on the Motzkin walks and their variations. Here we consider a Motzkin walk with a different Hilbert space on each step of the walk spanned by the elements of a symmetric inverse semigroup with the direction of each step governed by its algebraic structure. This change alters the number of paths allowed in the Motzkin walk and introduces a ground state degeneracy that is sensitive to boundary perturbations. We study the frustration-free spin chains based on three symmetric inverse semigroups …

  11. Random complex dynamics and devil's coliseums

    NASA Astrophysics Data System (ADS)

    Sumi, Hiroki

    2015-04-01

    We investigate the random dynamics of polynomial maps on the Riemann sphere Ĉ and the dynamics of semigroups of polynomial maps on Ĉ. In particular, the dynamics of a semigroup G of polynomials whose planar postcritical set is bounded and the associated random dynamics are studied. In general, the Julia set of such a G may be disconnected. We show that if G is such a semigroup, then regarding the associated random dynamics, the chaos of the averaged system disappears in the C0 sense, and the function T∞ of the probability of tending to ∞ ∈ Ĉ is Hölder continuous on Ĉ and varies only on the Julia set of G. Moreover, the function T∞ has a kind of monotonicity. It turns out that T∞ is a complex analogue of the devil's staircase, and we call T∞ a 'devil's coliseum'. We investigate the details of T∞ when G is generated by two polynomials. In this case, T∞ varies precisely on the Julia set of G, which is a thin fractal set. Moreover, under this condition, we investigate the pointwise Hölder exponents of T∞.

  12. Lévy targeting and the principle of detailed balance.

    PubMed

    Garbaczewski, Piotr; Stephanovich, Vladimir

    2011-07-01

    We investigate confining mechanisms for Lévy flights under premises of the principle of detailed balance. In this case, the master equation of the jump-type process admits a transformation to the Lévy-Schrödinger semigroup dynamics akin to a mapping of the Fokker-Planck equation into the generalized diffusion equation. This sets a correspondence between the above two stochastic dynamical systems, within which we address a (stochastic) targeting problem for an arbitrary stability index μ ∈ (0,2) of symmetric Lévy drivers. Namely, given a probability density function, specify the semigroup potential, and thence the jump-type dynamics for which this PDF is actually a long-time asymptotic (target) solution of the master equation. Here, an asymptotic behavior of different μ-motion scenarios ceases to depend on μ. That is exemplified by considering Gaussian and Cauchy family target PDFs. A complementary problem of reverse engineering is analyzed: given a priori a semigroup potential, quantify how sensitive upon the choice of the μ driver is an asymptotic behavior of solutions of the associated master equation, and thus an invariant PDF itself. This task is accomplished for the so-called μ-family of Lévy oscillators.

  13. Takeover times for a simple model of network infection.

    PubMed

    Ottino-Löffler, Bertrand; Scott, Jacob G; Strogatz, Steven H

    2017-07-01

    We study a stochastic model of infection spreading on a network. At each time step a node is chosen at random, along with one of its neighbors. If the node is infected and the neighbor is susceptible, the neighbor becomes infected. How many time steps T does it take to completely infect a network of N nodes, starting from a single infected node? An analogy to the classic "coupon collector" problem of probability theory reveals that the takeover time T is dominated by extremal behavior, either when there are only a few infected nodes near the start of the process or a few susceptible nodes near the end. We show that for N≫1, the takeover time T is distributed as a Gumbel distribution for the star graph, as the convolution of two Gumbel distributions for a complete graph and an Erdős-Rényi random graph, as a normal for a one-dimensional ring and a two-dimensional lattice, and as a family of intermediate skewed distributions for d-dimensional lattices with d≥3 (these distributions approach the convolution of two Gumbel distributions as d approaches infinity). Connections to evolutionary dynamics, cancer, incubation periods of infectious diseases, first-passage percolation, and other spreading phenomena in biology and physics are discussed.

  14. Takeover times for a simple model of network infection

    NASA Astrophysics Data System (ADS)

    Ottino-Löffler, Bertrand; Scott, Jacob G.; Strogatz, Steven H.

    2017-07-01

    We study a stochastic model of infection spreading on a network. At each time step a node is chosen at random, along with one of its neighbors. If the node is infected and the neighbor is susceptible, the neighbor becomes infected. How many time steps T does it take to completely infect a network of N nodes, starting from a single infected node? An analogy to the classic "coupon collector" problem of probability theory reveals that the takeover time T is dominated by extremal behavior, either when there are only a few infected nodes near the start of the process or a few susceptible nodes near the end. We show that for N ≫1 , the takeover time T is distributed as a Gumbel distribution for the star graph, as the convolution of two Gumbel distributions for a complete graph and an Erdős-Rényi random graph, as a normal for a one-dimensional ring and a two-dimensional lattice, and as a family of intermediate skewed distributions for d -dimensional lattices with d ≥3 (these distributions approach the convolution of two Gumbel distributions as d approaches infinity). Connections to evolutionary dynamics, cancer, incubation periods of infectious diseases, first-passage percolation, and other spreading phenomena in biology and physics are discussed.
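As a quick plausibility check on the complete-graph case described above, here is a minimal simulation sketch (function names are mine, not from the paper). Under the stated update rule on K_n with m nodes infected, a step creates a new infection with probability m(n−m)/(n(n−1)); summing the resulting geometric waiting times gives a coupon-collector-style mean E[T] = 2(n−1)H_{n−1}, which the sample mean should approximate.

```python
import random

def takeover_time_complete(n, rng):
    """One run of the infection model on the complete graph K_n.

    Each step picks a uniform node and a uniform neighbor; with m nodes
    infected, a new infection occurs with probability m(n-m)/(n(n-1)).
    """
    infected, t = 1, 0
    while infected < n:
        t += 1
        if rng.random() < infected * (n - infected) / (n * (n - 1)):
            infected += 1
    return t

rng = random.Random(0)
n, trials = 50, 300
mean_T = sum(takeover_time_complete(n, rng) for _ in range(trials)) / trials

# coupon-collector-style mean: E[T] = 2 (n-1) H_{n-1}
H = sum(1.0 / m for m in range(1, n))
expected = 2 * (n - 1) * H
print(round(mean_T, 1), round(expected, 1))
```

Only the mean is checked here; the distributional claim (a convolution of two Gumbels for the complete graph) would show up in the heavy tails of the per-run histogram.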

  15. Asymptotic behavior of distributions of mRNA and protein levels in a model of stochastic gene expression

    NASA Astrophysics Data System (ADS)

    Bobrowski, Adam; Lipniacki, Tomasz; Pichór, Katarzyna; Rudnicki, Ryszard

    2007-09-01

    The paper is devoted to a stochastic process introduced in the recent paper by Lipniacki et al. [T. Lipniacki, P. Paszek, A. Marciniak-Czochra, A.R. Brasier, M. Kimmel, Transcriptional stochasticity in gene expression, J. Theor. Biol. 238 (2006) 348-367] in modelling gene expression in eukaryotes. Starting from the full generator of the process we show that its distributions satisfy a (Fokker-Planck-type) system of partial differential equations. Then, we construct a C_0 Markov semigroup in an L^1 space corresponding to this system. The main result of the paper is asymptotic stability of the involved semigroup in the set of densities.

  16. Asymptotics of the evolution semigroup associated with a scalar field in the presence of a non-linear electromagnetic field

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Tamura, Hiroshi

    2018-04-01

    We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).

  17. THE SEMIGROUP OF METRIC MEASURE SPACES AND ITS INFINITELY DIVISIBLE PROBABILITY MEASURES

    PubMed Central

    EVANS, STEVEN N.; MOLCHANOV, ILYA

    2015-01-01

    A metric measure space is a complete, separable metric space equipped with a probability measure that has full support. Two such spaces are equivalent if they are isometric as metric spaces via an isometry that maps the probability measure on the first space to the probability measure on the second. The resulting set of equivalence classes can be metrized with the Gromov–Prohorov metric of Greven, Pfaffelhuber and Winter. We consider the natural binary operation ⊞ on this space that takes two metric measure spaces and forms their Cartesian product equipped with the sum of the two metrics and the product of the two probability measures. We show that the metric measure spaces equipped with this operation form a cancellative, commutative, Polish semigroup with a translation invariant metric. There is an explicit family of continuous semicharacters that is extremely useful for, inter alia, establishing that there are no infinitely divisible elements and that each element has a unique factorization into prime elements. We investigate the interaction between the semigroup structure and the natural action of the positive real numbers on this space that arises from scaling the metric. For example, we show that for any given positive real numbers a, b, c the trivial space is the only space X that satisfies aX ⊞ bX = cX. We establish that there is no analogue of the law of large numbers: if X1, X2, … is an identically distributed independent sequence of random spaces, then no subsequence of (1/n) ⊞_{k=1}^{n} X_k converges in distribution unless each Xk is almost surely equal to the trivial space. We characterize the infinitely divisible probability measures and the Lévy processes on this semigroup, characterize the stable probability measures and establish a counterpart of the LePage representation for the latter class. PMID:28065980

  18. Learning the Relationship between Galaxy Spectra and Star Formation Histories

    NASA Astrophysics Data System (ADS)

    Lovell, Christopher; Acquaviva, Viviana; Iyer, Kartheik; Gawiser, Eric

    2018-01-01

    We explore novel approaches to the problem of predicting a galaxy's star formation history (SFH) from its Spectral Energy Distribution (SED). Traditional approaches to SED template fitting use constant or exponentially declining SFHs, and are known to incur significant bias in the inferred SFHs, which are typically skewed toward younger stellar populations. Machine learning approaches, including tree ensemble methods and convolutional neural networks, would not be affected by the same bias, and may work well in recovering unbiased and multi-episodic star formation histories. We use a supervised approach whereby models are trained using synthetic spectra, generated from three state-of-the-art hydrodynamical simulations, including nebular emission. We explore how SED feature maps can be used to highlight areas of the spectrum with the highest predictive power and discuss the limitations of the approach when applied to real data.

  19. Quantum Markov semigroups constructed from quantum Bernoulli noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Caishi; Chen, Jinshu

    2016-02-15

    Quantum Bernoulli noises (QBNs) are the family of annihilation and creation operators acting on Bernoulli functionals, which can describe a two-level quantum system with infinitely many sites. In this paper, we consider the problem to construct quantum Markov semigroups (QMSs) directly from QBNs. We first establish several new theorems concerning QBNs. In particular, we define the number operator acting on Bernoulli functionals by using the canonical orthonormal basis, prove its self-adjoint property, and describe precisely its connections with QBNs in a mathematically rigorous way. We then show the possibility to construct QMSs directly from QBNs. This is done by combining the general results on QMSs with our new results on QBNs obtained here. Finally, we examine some properties of QMSs constructed from QBNs.

  20. Prediction, time variance, and classification of hydraulic response to recharge in two karst aquifers

    USGS Publications Warehouse

    Long, Andrew J.; Mahler, Barbara J.

    2013-01-01

    Many karst aquifers are rapidly filled and depleted and therefore are likely to be susceptible to changes in short-term climate variability. Here we explore methods that could be applied to model site-specific hydraulic responses, with the intent of simulating these responses to different climate scenarios from high-resolution climate models. We compare hydraulic responses (spring flow, groundwater level, stream base flow, and cave drip) at several sites in two karst aquifers: the Edwards aquifer (Texas, USA) and the Madison aquifer (South Dakota, USA). A lumped-parameter model simulates nonlinear soil moisture changes for estimation of recharge, and a time-variant convolution model simulates the aquifer response to this recharge. Model fit to data is 2.4% better for calibration periods than for validation periods according to the Nash–Sutcliffe coefficient of efficiency, which ranges from 0.53 to 0.94 for validation periods. We use metrics that describe the shapes of the impulse-response functions (IRFs) obtained from convolution modeling to make comparisons in the distribution of response times among sites and between aquifers. Time-variant IRFs were applied to 62% of the sites. Principal component analysis (PCA) of metrics describing the shapes of the IRFs indicates three principal components that together account for 84% of the variability in IRF shape: the first is related to IRF skewness and temporal spread and accounts for 51% of the variability; the second and third largely are related to time-variant properties and together account for 33% of the variability. Sites with IRFs that dominantly comprise exponential curves are separated geographically from those dominantly comprising lognormal curves in both aquifers as a result of spatial heterogeneity. The use of multiple IRF metrics in PCA is a novel method to characterize, compare, and classify the way in which different sites and aquifers respond to recharge. 
As convolution models are developed for additional aquifers, they could contribute to an IRF database and a general classification system for karst aquifers.
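The recharge-to-response step of such a convolution model can be sketched in a few lines: a unit-area impulse-response function (here lognormal in shape, one of the IRF forms mentioned above) is convolved with a recharge series to produce the simulated hydraulic response. All parameter values and pulse timings below are hypothetical illustrations, not values from the study.

```python
import numpy as np

def lognormal_irf(t, mu, sigma):
    """Lognormal impulse-response function, normalized to unit area on the grid."""
    irf = np.zeros_like(t, dtype=float)
    pos = t > 0
    irf[pos] = np.exp(-(np.log(t[pos]) - mu) ** 2 / (2 * sigma ** 2)) \
               / (t[pos] * sigma * np.sqrt(2 * np.pi))
    return irf / irf.sum()

t = np.arange(0.0, 120.0)                  # daily time grid
recharge = np.zeros_like(t)
recharge[[5, 40, 41]] = [10.0, 4.0, 6.0]   # hypothetical recharge pulses

irf = lognormal_irf(t, mu=2.5, sigma=0.6)  # median lag of about e^2.5 ~ 12 days
response = np.convolve(recharge, irf)[: t.size]  # simulated spring-flow response
print(round(response.sum(), 3))
```

Because the IRF is normalized to unit area, the summed response returns (nearly all of) the total recharge; IRF-shape metrics such as skewness and temporal spread of `irf` are then what the PCA classification in the study compares across sites.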

  1. Semigroup characterization of Besov type Morrey spaces and well-posedness of generalized Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Cheng; Yang, Qixiang

    The well-posedness of generalized Navier-Stokes equations with initial data in some critical homogeneous Besov spaces and in some critical Q spaces was known. In this paper, we establish a wavelet characterization of Besov type Morrey spaces under the action of a semigroup. As an application, we obtain the well-posedness of smooth solutions for the generalized Navier-Stokes equations with initial data in some critical homogeneous Besov type Morrey spaces (1/2 < β < 1, γ_1 − γ_2 = 1 − 2β), …

  2. An Algebraic Approach to Unital Quantities and their Measurement

    NASA Astrophysics Data System (ADS)

    Domotor, Zoltan; Batitsky, Vadim

    2016-06-01

    The goals of this paper fall into two closely related areas. First, we develop a formal framework for deterministic unital quantities in which measurement unitization is understood to be a built-in feature of quantities rather than a mere annotation of their numerical values with convenient units. We introduce this idea within the setting of certain ordered semigroups of physical-geometric states of classical physical systems. States are assumed to serve as truth makers of metrological statements about quantity values. A unital quantity is presented as an isomorphism from the target system's ordered semigroup of states to that of positive reals. This framework allows us to include various derived and variable quantities, encountered in engineering and the natural sciences. For illustration and ease of presentation, we use the classical notions of length, time, electric current and mean velocity as primordial examples. The most important application of the resulting unital quantity calculus is in dimensional analysis. Second, in evaluating measurement uncertainty due to the analog-to-digital conversion of the measured quantity's value into its measuring instrument's pointer quantity value, we employ an ordered semigroup framework of pointer states. Pointer states encode the measuring instrument's indiscernibility relation, manifested by not being able to distinguish the measured system's topologically proximal states. Once again, we focus mainly on the measurement of length and electric current quantities as our motivating examples. Our approach to quantities and their measurement is strictly state-based and algebraic in flavor, rather than following the representationalist style of structure-preserving numerical assignments.

  3. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection.

    PubMed

    Wahab, Noorul; Khan, Asifullah; Lee, Yeon Soo

    2017-06-01

    Different types of breast cancer are affecting the lives of women across the world. Common types include Ductal carcinoma in situ (DCIS), Invasive ductal carcinoma (IDC), Tubular carcinoma, Medullary carcinoma, and Invasive lobular carcinoma (ILC). While detecting cancer, one important factor is the mitotic count, which shows how rapidly the cells are dividing. But the class imbalance problem, due to the small number of mitotic nuclei in comparison to the overwhelming number of non-mitotic nuclei, affects the performance of classification models. This work presents a two-phase model to mitigate the class-bias issue while classifying mitotic and non-mitotic nuclei in breast cancer histopathology images through a deep convolutional neural network (CNN). First, nuclei are segmented out using blue ratio and global binary thresholding. In Phase-1, a CNN is then trained on the segmented-out 80×80-pixel patches based on a standard dataset. Hard non-mitotic examples are identified and augmented; mitotic examples are oversampled by rotation and flipping; whereas non-mitotic examples are undersampled by blue-ratio-histogram-based k-means clustering. Based on this information from Phase-1, the dataset is modified for Phase-2 in order to reduce the effects of class imbalance. The proposed CNN architecture and data balancing technique yielded an F-measure of 0.79, and outperformed all the methods relying on specific handcrafted features, as well as those using a combination of handcrafted and CNN-generated features.
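The rotation-and-flip oversampling of the minority (mitotic) class mentioned above can be sketched with the eight dihedral symmetries of a square patch. The arrays here are random stand-ins for segmented 80×80 nuclei patches, and the function name is mine, not from the paper.

```python
import numpy as np

def oversample_by_symmetry(patches):
    """Return the 8 dihedral variants (4 rotations x optional mirror)
    of each square patch -- the rotation/flip oversampling idea."""
    out = []
    for p in patches:
        for k in range(4):               # 0/90/180/270 degree rotations
            r = np.rot90(p, k)
            out.append(r)
            out.append(np.fliplr(r))     # mirrored copy of each rotation
    return out

rng = np.random.default_rng(0)
mitotic = [rng.random((80, 80)) for _ in range(3)]   # toy minority-class patches
augmented = oversample_by_symmetry(mitotic)
print(len(augmented))   # 3 patches -> 24 training examples
```

These symmetries are label-preserving for histopathology patches (nuclei have no canonical orientation), which is why they are a safe way to multiply the minority class.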

  4. Solutions to variational inequalities of parabolic type

    NASA Astrophysics Data System (ADS)

    Zhu, Yuanguo

    2006-09-01

    The existence of strong solutions to a kind of variational inequality of parabolic type is investigated by the theory of semigroups of linear operators. As an application, an abstract semipermeable media problem is studied.

  5. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  6. Noisy metrology: a saturable lower bound on quantum Fisher information

    NASA Astrophysics Data System (ADS)

    Yousefjani, R.; Salimi, S.; Khorashad, A. S.

    2017-06-01

    In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for the precision of estimation is introduced. Unlike the bounds previously introduced in the literature, this upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.

  7. Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM

    NASA Astrophysics Data System (ADS)

    Shima, Yoshihiro

    2018-04-01

    Neural networks are a powerful means of classifying object images. The proposed image-category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor; rather than being trained here, it is used as pre-trained on the large-scale object-image dataset ImageNet. An SVM is used as the trainable classifier, and the feature vectors from Alex-Net are passed to it. The STL-10 dataset, with ten classes and clearly split training and test samples, is used for the object images. The STL-10 object images are trained by the SVM with data augmentation. We use a pattern-transformation method based on the cosine function, under which the original patterns are left-justified, right-justified, top-justified, or bottom-justified, as well as center-justified and enlarged. We also apply other augmentation methods such as rotation, skewing, and elastic distortion. Augmentation with the cosine transformation decreases the test error rate by 0.435 percentage points from 16.055%, whereas the other augmentation methods (rotation, skewing, and elastic distortion) increase error rates compared with no augmentation. The number of augmented samples is 30 times that of the original 5K STL-10 training samples. The experimental test error rate on the 8K STL-10 test images was 15.620%, which shows that image augmentation is effective for image-category classification.

  8. Attractors of equations of non-Newtonian fluid dynamics

    NASA Astrophysics Data System (ADS)

    Zvyagin, V. G.; Kondrat'ev, S. K.

    2014-10-01

    This survey describes a version of the trajectory-attractor method, which is applied to study the limit asymptotic behaviour of solutions of equations of non-Newtonian fluid dynamics. The trajectory-attractor method emerged in papers of the Russian mathematicians Vishik and Chepyzhov and the American mathematician Sell under the condition that the corresponding trajectory spaces be invariant under the translation semigroup. The need for such an approach was caused by the fact that for many equations of mathematical physics for which the Cauchy initial-value problem has a global (weak) solution with respect to the time, the uniqueness of such a solution has either not been established or does not hold. In particular, this is the case for equations of fluid dynamics. At the same time, trajectory spaces invariant under the translation semigroup could not be constructed for many equations of non-Newtonian fluid dynamics. In this connection, a different approach to the construction of trajectory attractors for dissipative systems was proposed in papers of Zvyagin and Vorotnikov without using invariance of trajectory spaces under the translation semigroup and is based on the topological lemma of Shura-Bura. This paper presents examples of equations of non-Newtonian fluid dynamics (the Jeffreys system describing movement of the Earth's crust, the model of motion of weak aqueous solutions of polymers, a system with memory) for which the aforementioned construction is used to prove the existence of attractors in both the autonomous and the non-autonomous cases. At the beginning of the paper there is also a brief exposition of the results of Ladyzhenskaya on the existence of attractors of the two-dimensional Navier-Stokes system and the result of Vishik and Chepyzhov for the case of attractors of the three-dimensional Navier-Stokes system. Bibliography: 34 titles.

  9. Roots and decompositions of three-dimensional topological objects

    NASA Astrophysics Data System (ADS)

    Matveev, Sergei V.

    2012-06-01

    In 1942 M.H.A. Newman formulated and proved a simple lemma of great importance for various fields of mathematics, including algebra and the theory of Gröbner-Shirshov bases. Later it was called the Diamond Lemma, since its key construction was illustrated by a diamond-shaped diagram. In 2005 the author suggested a new version of this lemma suitable for topological applications. This paper gives a survey of results on the existence and uniqueness of prime decompositions of various topological objects: three-dimensional manifolds, knots in thickened surfaces, knotted graphs, three-dimensional orbifolds, and knotted theta-curves in three-dimensional manifolds. As it turned out, all these topological objects admit a prime decomposition, although it is not unique in some cases (for example, in the case of orbifolds). For theta-curves and knots of geometric degree 1 in a thickened torus, the algebraic structure of the corresponding semigroups can be completely described. In both cases the semigroups are quotients of free groups by explicit commutation relations. Bibliography: 33 titles.

  10. Degenerate SDEs with singular drift and applications to Heisenberg groups

    NASA Astrophysics Data System (ADS)

    Huang, Xing; Wang, Feng-Yu

    2018-09-01

    By using the ultracontractivity of a reference diffusion semigroup, Krylov's estimate is established for a class of degenerate SDEs with singular drifts, which leads to existence and pathwise uniqueness by means of Zvonkin's transformation. The main result is applied to singular SDEs on generalized Heisenberg groups.

  11. On the Connectedness of Attractors for Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Gobbino, Massimo; Sardella, Mirko

    1997-01-01

    For a dynamical system on a connected metric space X, the global attractor (when it exists) is connected provided that either the semigroup is time-continuous or X is locally connected. Moreover, there exists an example of a dynamical system on a connected metric space which admits a disconnected global attractor.

  12. Uniform gradient estimates on manifolds with a boundary and applications

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Juan; Thalmaier, Anton; Thompson, James

    2018-04-01

    We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.

  13. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  14. Existence and Regularity of Invariant Measures for the Three Dimensional Stochastic Primitive Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatt-Holtz, Nathan, E-mail: negh@vt.edu; Kukavica, Igor, E-mail: kukavica@usc.edu; Ziane, Mohammed, E-mail: ziane@usc.edu

    2014-05-15

    We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.

  15. On a Mathematical Model with Noncompact Boundary Conditions Describing Bacterial Population

    NASA Astrophysics Data System (ADS)

    Boulanouar, Mohamed

    2013-04-01

    In this work, we are concerned with the well-posedness of a mathematical model describing a maturation-velocity structured bacterial population. Each bacterium is distinguished by its degree of maturity and its maturation velocity. The bacterial mitosis is mathematically described by noncompact boundary conditions. We show that the mathematical model is governed by a positive strongly continuous semigroup.

  16. Dynamical maps, quantum detailed balance, and the Petz recovery map

    NASA Astrophysics Data System (ADS)

    Alhambra, Álvaro M.; Woods, Mischa P.

    2017-08-01

    Markovian master equations (formally known as quantum dynamical semigroups) can be used to describe the evolution of a quantum state ρ when in contact with a memoryless thermal bath. This approach has had much success in describing the dynamics of real-life open quantum systems in the laboratory. Such dynamics increase the entropy of the state ρ and the bath until both systems reach thermal equilibrium, at which point entropy production stops. Our main result is to show that the entropy production at time t is bounded by the relative entropy between the original state and the state at time 2t. The bound puts strong constraints on how quickly a state can thermalize, and we prove that the factor of 2 is tight. The proof makes use of a key physically relevant property of these dynamical semigroups, detailed balance, showing that this property is intimately connected with the field of recovery maps from quantum information theory. We envisage that the connections made here between the two fields will have further applications. We also use this connection to show that a similar relation can be derived when the fixed point is not thermal.

  17. Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects

    PubMed Central

    Habtegiorgis, Selam W.; Rifai, Katharina; Lappe, Markus; Wahl, Siegfried

    2017-01-01

    Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. PMID:28751870

  18. Estimating generalized skew of the log-Pearson Type III distribution for annual peak floods in Illinois

    USGS Publications Warehouse

    Oberg, Kevin A.; Mades, Dean M.

    1987-01-01

    Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
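A station-skew computation of the kind compared above can be sketched as follows: the log-Pearson Type III fit uses the sample skew coefficient of the log10 annual peaks, and the simplest regional-mean skew is the average of the station values. The peak-flow series below are hypothetical, not Illinois gage data, and the function name is mine.

```python
import math

def station_skew(peaks):
    """Sample skew coefficient of the log10 annual peaks
    (with the standard n/((n-1)(n-2)) small-sample factor)."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    return n * sum((x - mean) ** 3 for x in logs) / ((n - 1) * (n - 2) * s ** 3)

# hypothetical annual peak flows (cfs) at two gaging stations
station_a = [1200, 950, 4300, 800, 2100, 1500, 990, 3100, 1250, 700, 1800]
station_b = [300, 420, 150, 380, 900, 610, 275, 330, 500, 260, 440]

skews = [station_skew(p) for p in (station_a, station_b)]
regional_mean = sum(skews) / len(skews)   # simplest regional-mean skew
print([round(g, 2) for g in skews], round(regional_mean, 2))
```

In practice a generalized skew would weight many such station values (or map them spatially), precisely because individual station skews from short records are noisy.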

  19. Howard University Symposium on Nonlinear Semigroups, Partial Differential Equations and Attractors (2nd) Held in Washington, D. C. on 3-7 August 1987.

    DTIC Science & Technology

    1987-09-30

Program excerpt (partially recoverable): ... Optics"; 9:15 - 10:00 a.m., Stuart Antman, University of Maryland, "Asymptotics of Quasilinear Equations of Viscoelasticity"; 10:00 - 10:45 a.m., Jerome A. ... Contact listings include John Cannon, Mathematics Department, University of Maryland, College Park, MD 20742, and the Director of Mathematical Sciences, Office of Naval Research, Arlington, VA.

  20. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each , the corresponding random walk takes values in the finite set of states ; notice that is a subset of , where is the unit circle. The infinitesimal generator of such a chain is denoted by . The stationary probability of this process converges to the uniform distribution on the circle when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We disturb this process by a potential and study, for each , the perturbed stationary measures of the new process when . We disturb the system by considering a fixed potential , and we denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . From the continuous time Perron's Theorem one can normalize this semigroup and thereby obtain another, stochastic, semigroup which generates a continuous time Markov chain taking values on . This new chain is called the continuous time Gibbs state associated to the potential ; see Lopes et al. (J Stat Phys 152:894-933, 2013). The stationary probability vector of this Markov chain is denoted by . We assume that the maximum of is attained at a unique point of , and from this it follows that . Thus, our main goal here is to analyze the large deviation principle for the family when . The deviation function , which is defined on , is obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the large deviation principle, in the Skorohod space, for this family of Markov chains when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated with the Markov chains we analyze.

  1. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, but accurately enough to serve as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.

  2. The Affective Impact of Financial Skewness on Neural Activity and Choice

    PubMed Central

    Wu, Charlene C.; Bossaerts, Peter; Knutson, Brian

    2011-01-01

    Few finance theories consider the influence of “skewness” (or large and asymmetric but unlikely outcomes) on financial choice. We investigated the impact of skewed gambles on subjects' neural activity, self-reported affective responses, and subsequent preferences using functional magnetic resonance imaging (FMRI). Neurally, skewed gambles elicited more anterior insula activation than symmetric gambles equated for expected value and variance, and positively skewed gambles also specifically elicited more nucleus accumbens (NAcc) activation than negatively skewed gambles. Affectively, positively skewed gambles elicited more positive arousal and negatively skewed gambles elicited more negative arousal than symmetric gambles equated for expected value and variance. Subjects also preferred positively skewed gambles more, but negatively skewed gambles less than symmetric gambles of equal expected value. Individual differences in both NAcc activity and positive arousal predicted preferences for positively skewed gambles. These findings support an anticipatory affect account in which statistical properties of gambles—including skewness—can influence neural activity, affective responses, and ultimately, choice. PMID:21347239
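    The notion of gambles "equated for expected value and variance" but differing in skewness can be illustrated with a small sketch; the two-outcome gambles below are hypothetical, not the stimuli used in the study.

    ```python
    def moments(outcomes, probs):
        """Mean, variance, and skewness of a discrete gamble."""
        ev = sum(p * x for x, p in zip(outcomes, probs))
        var = sum(p * (x - ev) ** 2 for x, p in zip(outcomes, probs))
        skew = sum(p * (x - ev) ** 3 for x, p in zip(outcomes, probs)) / var ** 1.5
        return ev, var, skew

    pos = moments([9.0, -1.0], [0.1, 0.9])   # rare large win: positive skew
    neg = moments([-9.0, 1.0], [0.1, 0.9])   # rare large loss: negative skew
    # Both gambles have expected value 0 and variance 9; only the third
    # standardized moment (skewness) distinguishes them.
    ```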

  3. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : technical summary.

    DOT National Transportation Integrated Search

    2017-08-01

Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  4. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : final report.

    DOT National Transportation Integrated Search

    2017-08-01

Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  5. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
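    The higher-moment inputs that the mean-variance-skewness-kurtosis model adds can be sketched as the standard moment ratios of a return series; this is a minimal illustration with made-up returns, not code from the paper.

    ```python
    import statistics

    def skew_kurt(returns):
        """Sample skewness and excess kurtosis of a return series."""
        n = len(returns)
        mu = statistics.fmean(returns)
        m2 = sum((r - mu) ** 2 for r in returns) / n
        m3 = sum((r - mu) ** 3 for r in returns) / n
        m4 = sum((r - mu) ** 4 for r in returns) / n
        return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

    # Returns with occasional large losses score negative skewness,
    # which a risk-averse, skewness-preferring investor penalizes:
    monthly = [0.02, 0.01, 0.03, -0.10, 0.02, 0.01, 0.02, -0.08, 0.03, 0.02]
    s, k = skew_kurt(monthly)
    ```

    Polynomial goal programming then trades off these moments against mean and variance as competing objectives.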

  6. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
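    The convolutional coding fundamentals referred to above can be illustrated with a minimal rate-1/2 feedforward encoder; the constraint-length-3 code with octal generators (7, 5) is a classic textbook choice, not one specified in this report.

    ```python
    def conv_encode(bits, gens=(0b111, 0b101)):
        """Rate-1/n feedforward convolutional encoder.

        `gens` are the generator polynomials as bit masks over the shift
        register; the default (7, 5 in octal) is the classic K=3 code.
        """
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0b111          # shift the new bit in
            for g in gens:
                out.append(bin(state & g).count("1") & 1)  # mod-2 tap sum
        return out

    code_bits = conv_encode([1, 0, 1, 1])   # -> [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    Each input bit produces n output bits, the redundancy a Viterbi or sequential decoder later exploits.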

  7. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.

  8. Increased skewing of X chromosome inactivation in Rett syndrome patients and their mothers.

    PubMed

    Knudsen, Gun Peggy S; Neilson, Tracey C S; Pedersen, June; Kerr, Alison; Schwartz, Marianne; Hulten, Maj; Bailey, Mark E S; Orstavik, Karen Helene

    2006-11-01

Rett syndrome is a largely sporadic, X-linked neurological disorder with a characteristic phenotype, but which exhibits substantial phenotypic variability. This variability has been partly attributed to an effect of X chromosome inactivation (XCI). There have been conflicting reports regarding the incidence of skewed X inactivation in Rett syndrome. In rare familial cases of Rett syndrome, favourably skewed X inactivation has been found in phenotypically normal carrier mothers. We have investigated the X inactivation pattern in DNA from blood and buccal cells of sporadic Rett patients (n=96) and their mothers (n=84). The mean degree of skewing in blood was higher in patients (70.7%) than controls (64.9%). Unexpectedly, the mothers of these patients also had a higher mean degree of skewing in blood (70.8%) than controls. In accordance with these findings, the frequency of skewed (XCI ≥ 80%) X inactivation in blood was also higher in both patients (25%) and mothers (30%) than in controls (11%). To test whether the Rett patients with skewed X inactivation were daughters of skewed mothers, 49 mother-daughter pairs were analysed. Of 14 patients with skewed X inactivation, only three had a mother with skewed X inactivation. Among patients, mildly affected cases were shown to be more skewed than more severely affected cases, and there was a trend towards preferential inactivation of the paternally inherited X chromosome in skewed cases. These findings, particularly the greater degree of X inactivation skewing in Rett syndrome patients, are of potential significance in the analysis of genotype-phenotype correlations in Rett syndrome.

  9. Sex differences in the drivers of reproductive skew in a cooperative breeder.

    PubMed

    Nelson-Flower, Martha J; Flower, Tom P; Ridley, Amanda R

    2018-04-16

    Many cooperatively breeding societies are characterized by high reproductive skew, such that some socially dominant individuals breed, while socially subordinate individuals provide help. Inbreeding avoidance serves as a source of reproductive skew in many high-skew societies, but few empirical studies have examined sources of skew operating alongside inbreeding avoidance or compared individual attempts to reproduce (reproductive competition) with individual reproductive success. Here, we use long-term genetic and observational data to examine factors affecting reproductive skew in the high-skew cooperatively breeding southern pied babbler (Turdoides bicolor). When subordinates can breed, skew remains high, suggesting factors additional to inbreeding avoidance drive skew. Subordinate females are more likely to compete to breed when older or when ecological constraints on dispersal are high, but heavy subordinate females are more likely to successfully breed. Subordinate males are more likely to compete when they are older, during high ecological constraints, or when they are related to the dominant male, but only the presence of within-group unrelated subordinate females predicts subordinate male breeding success. Reproductive skew is not driven by reproductive effort, but by forces such as intrinsic physical limitations and intrasexual conflict (for females) or female mate choice, male mate-guarding and potentially reproductive restraint (for males). Ecological conditions or "outside options" affect the occurrence of reproductive conflict, supporting predictions of recent synthetic skew models. Inbreeding avoidance together with competition for access to reproduction may generate high skew in animal societies, and disparate processes may be operating to maintain male vs. female reproductive skew in the same species. © 2018 John Wiley & Sons Ltd.

  10. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
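    The space-invariant versus space-varying distinction can be sketched in one dimension: the impulse response is looked up per output position, which is exactly what makes the FFT shortcut inapplicable. This is the direct (hence slow) baseline, not the authors' matrix source coding method.

    ```python
    def space_varying_convolve(x, kernel_at):
        """Direct 1-D space-varying convolution: the impulse response
        returned by kernel_at(i) may differ at every output position i."""
        n = len(x)
        y = [0.0] * n
        for i in range(n):
            h = kernel_at(i)          # local impulse response (odd length)
            r = len(h) // 2
            y[i] = sum(h[r + k] * x[i - k]
                       for k in range(-r, r + 1) if 0 <= i - k < n)
        return y
    ```

    When kernel_at returns the same kernel for every i, this reduces to ordinary convolution; once it truly varies with i, only direct evaluation or an approximate factorization into sparse transforms (the approach taken here) remains.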

  11. An approximation theory for nonlinear partial differential equations with applications to identification and control

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1982-01-01

    Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.

  12. Hybrid excited claw pole generator with skewed and non-skewed permanent magnets

    NASA Astrophysics Data System (ADS)

    Wardach, Marcin

    2017-12-01

This article contains simulation results for the Hybrid Excited Claw Pole Generator with skewed and non-skewed permanent magnets on the rotor. The experimental machine has claw poles on two rotor sections, between which an excitation control coil is located. The novelty of this machine is the presence of non-skewed permanent magnets on the claws of one part of the rotor and skewed permanent magnets on the other. The paper presents the construction of the machine and analyzes the influence of PM skewing on the cogging torque and back-emf. Simulation studies enabled the determination of the cogging torque and the back-emf rms for both the strengthening and the weakening of the magnetic field. The influence of magnet skewing on the cogging torque and the back-emf rms has also been analyzed.

  13. On the Effects of Wind Turbine Wake Skew Caused by Wind Veer: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, Matthew J; Sirnivas, Senu

Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we attempt to examine the wake skewing process in some detail, and quantify how differently a skewed wake versus a non skewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8 degree wind direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or non skewed wake. We feel that the differences observed between the skewed and nonskewed wake are important enough that the skewing effect should be included in engineering wake models.

  14. On the Effects of Wind Turbine Wake Skew Caused by Wind Veer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, Matthew J; Sirnivas, Senu

Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we attempt to examine the wake skewing process in some detail, and quantify how differently a skewed wake versus a non skewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8 degree wind direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or non skewed wake. We feel that the differences observed between the skewed and nonskewed wake are important enough that the skewing effect should be included in engineering wake models.

  15. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  16. Improving energy efficiency in handheld biometric applications

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.

    2012-06-01

    With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution vice floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared include 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
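    The kernel operation being benchmarked can be sketched as direct "valid" 2-D convolution; whether the kernel holds floats or integers (e.g. a fixed-point kernel scaled by a power of two) changes only the arithmetic type, which is the source of the measured energy difference. This sketch is illustrative, not the RED implementation.

    ```python
    def convolve2d_valid(image, kernel):
        """Direct 'valid' 2-D convolution (kernel flipped, no padding).

        With an all-integer image and kernel this runs entirely in
        integer arithmetic; a float kernel forces floating-point
        multiplies at every tap.
        """
        ih, iw = len(image), len(image[0])
        kh, kw = len(kernel), len(kernel[0])
        return [[sum(kernel[kh - 1 - m][kw - 1 - n] * image[i + m][j + n]
                     for m in range(kh) for n in range(kw))
                 for j in range(iw - kw + 1)]
                for i in range(ih - kh + 1)]
    ```

    A common fixed-point trick is to scale a float kernel by, say, 2**8, round to integers, and right-shift the accumulated sums by 8 at the end.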

  17. Network Skewness Measures Resilience in Lake Ecosystems

    NASA Astrophysics Data System (ADS)

    Langdon, P. G.; Wang, R.; Dearing, J.; Zhang, E.; Doncaster, P.; Yang, X.; Yang, H.; Dong, X.; Hu, Z.; Xu, M.; Yanjie, Z.; Shen, J.

    2017-12-01

    Changes in ecosystem resilience defy straightforward quantification from biodiversity metrics, which ignore influences of community structure. Naturally self-organized network structures show positive skewness in the distribution of node connections. Here we test for skewness reduction in lake diatom communities facing anthropogenic stressors, across a network of 273 lakes in China containing 452 diatom species. Species connections show positively skewed distributions in little-impacted lakes, switching to negative skewness in lakes associated with human settlement, surrounding land-use change, and higher phosphorus concentration. Dated sediment cores reveal a down-shifting of network skewness as human impacts intensify, and reversal with recovery from disturbance. The appearance and degree of negative skew presents a new diagnostic for quantifying system resilience and impacts from exogenous forcing on ecosystem communities.

  18. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which makes it very suitable for processing images. Using a deep convolutional neural network is better for image retrieval than direct extraction of visual image features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.

  19. Continuity diaphragm for skewed continuous span precast prestressed concrete girder bridges.

    DOT National Transportation Integrated Search

    2004-10-01

Continuity diaphragms used on skewed bents in prestressed girder bridges cause difficulties in detailing and construction. Details for bridges with large diaphragm skew angles (>30°) have not been a problem for LA DOTD. However, as the skew angl...

  20. Nonlinear Semigroup for Controlled Partially Observed Diffusions.

    DTIC Science & Technology

    1980-08-21

Report documentation fragment: Air Force Office of Scientific Research, Bolling Air Force Base; approved for ... In this paper a "separated" control problem associated with controlled, partially observed diffusion processes is considered. The ... of Applied Mathematics, Brown University, Providence, Rhode Island 02912, August 21, 1980. This research was supported in part by the Air Force Office of ...

  1. Existence and energy decay of a nonuniform Timoshenko system with second sound

    NASA Astrophysics Data System (ADS)

    Hamadouche, Taklit; Messaoudi, Salim A.

    2018-02-01

In this paper, we consider a linear thermoelastic Timoshenko system with variable physical parameters, where the heat conduction is given by Cattaneo's law and the coupling is via the displacement equation. We discuss the well-posedness and the regularity of the solution using semigroup theory. Moreover, we establish an exponential decay result provided that the stability function χ_r(x) = 0. Otherwise, we show that the solution decays polynomially.

  2. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1984-01-01

The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, the uniform exponential stability of the solution semigroup is preserved under approximation. This is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  3. Advanced Numerical Methods for Computing Statistical Quantities of Interest from Solutions of SPDES

    DTIC Science & Technology

    2012-01-19

... and related optimization problems; developing numerical methods for option pricing problems in the presence of random arbitrage return. 1. Novel ... equations (BSDEs) are connected to nonlinear partial differential equations and nonlinear semigroups, to the theory of hedging and pricing of contingent ... the presence of random arbitrage return [3]. We consider option pricing problems when we relax the condition of no arbitrage in the Black-Scholes ...

  4. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolutions, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons, helping to improve the classification accuracy. In addition, techniques from deep learning such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.

  5. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  6. The effect of forward skewed rotor blades on aerodynamic and aeroacoustic performance of axial-flow fan

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Zhong, Fangyuan

Based on a comparative experiment, this paper deals with the use of tangentially skewed rotor blades in an axial-flow fan. Comparison of the overall performance of the fan with a skewed-blade rotor and a radial-blade rotor shows that the skewed blades operate more efficiently than the radial blades, especially at low volume flows. Meanwhile, a decrease in the pressure rise and flow rate of the axial-flow fan with skewed rotor blades is found. The rotor-stator interaction noise and broadband noise of the axial-flow fan are reduced with skewed rotor blades. Forward-skewed blades tend to reduce the accumulation of the blade boundary layer in the tip region resulting from the effect of centrifugal forces. The turning of streamlines from the outer radius region into the inner radius region in the blade passages, due to the radial component of the blade forces of skewed blades, is the main reason for the decrease in pressure rise and flow rate.

  7. DNA Asymmetric Strand Bias Affects the Amino Acid Composition of Mitochondrial Proteins

    PubMed Central

    Min, Xiang Jia; Hickey, Donal A.

    2007-01-01

    Variations in GC content between genomes have been extensively documented. Genomes with comparable GC contents can, however, still differ in the apportionment of the G and C nucleotides between the two DNA strands. This asymmetric strand bias is known as GC skew. Here, we have investigated the impact of differences in nucleotide skew on the amino acid composition of the encoded proteins. We compared orthologous genes between animal mitochondrial genomes that show large differences in GC and AT skews. Specifically, we compared the mitochondrial genomes of mammals, which are characterized by a negative GC skew and a positive AT skew, to those of flatworms, which show the opposite skews for both GC and AT base pairs. We found that the mammalian proteins are highly enriched in amino acids encoded by CA-rich codons (as predicted by their negative GC and positive AT skews), whereas their flatworm orthologs were enriched in amino acids encoded by GT-rich codons (also as predicted from their skews). We found that these differences in mitochondrial strand asymmetry (measured as GC and AT skews) can have very large, predictable effects on the composition of the encoded proteins. PMID:17974594
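The skew statistics used above are simple to compute per strand. A minimal sketch with a made-up toy sequence, using the standard conventions GC skew = (G − C)/(G + C) and AT skew = (A − T)/(A + T):

```python
# GC skew = (G - C)/(G + C) and AT skew = (A - T)/(A + T) for one strand.
# The sequence below is a hypothetical toy example, not data from the study.

def strand_skews(seq):
    seq = seq.upper()
    g, c, a, t = (seq.count(x) for x in "GCAT")
    gc_skew = (g - c) / (g + c) if g + c else 0.0
    at_skew = (a - t) / (a + t) if a + t else 0.0
    return gc_skew, at_skew

gc, at = strand_skews("ATTTACGCCCAA")   # G=1, C=4, A=4, T=3
print(round(gc, 3), round(at, 3))       # → -0.6 0.143
```

A negative GC skew with a positive AT skew, as in this toy strand, is the mammalian mitochondrial pattern described in the abstract; flatworms show the opposite signs.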

  8. Selection on skewed characters and the paradox of stasis

    PubMed Central

    Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel

    2018-01-01

    Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modelling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold’s (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date – repeatedly described as more evolutionarily stable than expected – so this skewness should be accounted for when investigating evolutionary dynamics in the wild. PMID:28921508

  9. Plasma Electrolyte Distributions in Humans-Normal or Skewed?

    PubMed

    Feldman, Mark; Dickson, Beverly

    2017-11-01

    It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na⁺), potassium (K⁺), chloride (Cl⁻) and bicarbonate (HCO₃⁻) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na⁺ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl⁻ distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. In contrast, both the plasma K⁺ and HCO₃⁻ distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that the K⁺ and HCO₃⁻ distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
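The asymmetry measure referred to above is the standardized third moment. A minimal sketch with illustrative values (not the study's electrolyte data), where a right-skewed sample yields a positive statistic:

```python
# Sample skewness g1 = m3 / m2^(3/2) (Fisher-Pearson, biased form).
# Right-skewed data give g1 > 0; a symmetric sample gives g1 = 0.
# The values below are illustrative, not the study's measurements.

def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

right_skewed = [3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.4, 5.0, 5.6]
print(skewness(right_skewed) > 0)   # → True (long right tail)
```

Formal significance testing against zero skew (as in the abstract) additionally needs the statistic's sampling distribution, e.g. via D'Agostino's skewness test.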

  10. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  11. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
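The key idea behind the O(k log(k)) method described above is that a maximum can be approximated by a p-norm, so max-convolution can be estimated from an ordinary convolution of elementwise p-th powers. The sketch below (with illustrative vectors) uses a direct O(k²) convolution for clarity; substituting an FFT-based convolution is what yields the stated runtime:

```python
# p-norm approximation: max_i u_i ≈ (sum_i u_i^p)^(1/p) for large p, so
# (a *max b)[m] = max_i a[i]*b[m-i] can be estimated from an ordinary
# convolution of a^p and b^p. Direct convolution is used here for clarity;
# an FFT convolution of the powered vectors gives the O(k log k) runtime.

def max_convolve_exact(a, b):
    return [max(a[i] * b[m - i]
                for i in range(len(a)) if 0 <= m - i < len(b))
            for m in range(len(a) + len(b) - 1)]

def max_convolve_pnorm(a, b, p=64):
    ap, bp = [x ** p for x in a], [x ** p for x in b]
    conv = [sum(ap[i] * bp[m - i]
                for i in range(len(ap)) if 0 <= m - i < len(bp))
            for m in range(len(ap) + len(bp) - 1)]
    return [c ** (1.0 / p) for c in conv]

a, b = [0.1, 0.9, 0.3], [0.5, 0.2, 0.8]
exact = max_convolve_exact(a, b)
approx = max_convolve_pnorm(a, b)
print(all(abs(x - y) < 0.05 for x, y in zip(exact, approx)))  # → True
```

The relative overestimate is bounded by k^(1/p), which shrinks toward 1 as p grows; the paper's contribution includes controlling this numerical error.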

  12. Performance of Serially Concatenated Convolutional Codes with Binary Modulation in AWGN and Noise Jamming over Rayleigh Fading Channels

    DTIC Science & Technology

    2001-09-01

    "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE… In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with… In this dissertation, the bit error rates of serially concatenated convolutional codes

  13. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.

  14. Approximate controllability of a system of parabolic equations with delay

    NASA Astrophysics Data System (ADS)

    Carrasco, Alexander; Leiva, Hugo

    2008-09-01

    In this paper we give necessary and sufficient conditions for the approximate controllability of the following system of parabolic equations with delay: where Ω is a bounded domain in ℝⁿ, D is an n×n nondiagonal matrix whose eigenvalues are semi-simple with nonnegative real part, the control … and B ∈ L(U, Z) with …. The standard notation z_t(x) defines a function from [−τ, 0] to … (with x fixed) by z_t(x)(s) = z(t + s, x), −τ ≤ s ≤ 0. Here τ ≥ 0 is the maximum delay, which is supposed to be finite. We assume that the operator … is linear and bounded, and φ₀ ∈ Z, φ ∈ L²([−τ, 0]; Z). To this end: First, we reformulate this system into a standard first-order delay equation. Secondly, the semigroup associated with the first-order delay equation on an appropriate product space is expressed as a series of strongly continuous semigroups and orthogonal projections related to the eigenvalues of the Laplacian operator (−Δ); this representation allows us to reduce the controllability of this partial differential equation with delay to a family of ordinary delay equations. Finally, we use the well-known result on the rank condition for the approximate controllability of delay systems to derive our main result.

  15. Selection on skewed characters and the paradox of stasis.

    PubMed

    Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel

    2017-11-01

    Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modeling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  16. Reward skewness coding in the insula independent of probability and loss

    PubMed Central

    Tobler, Philippe N.

    2011-01-01

    Rewards in the natural environment are rarely predicted with complete certainty. Uncertainty relating to future rewards has typically been defined as the variance of the potential outcomes. However, the asymmetry of predicted reward distributions, known as skewness, constitutes a distinct but neuroscientifically underexplored risk term that may also have an impact on preference. By changing only reward magnitudes, we study skewness processing in equiprobable ternary lotteries involving only gains and constant probabilities, thus excluding probability distortion or loss aversion as mechanisms for skewness preference formation. We show that individual preferences are sensitive to not only the mean and variance but also to the skewness of predicted reward distributions. Using neuroimaging, we show that the insula, a structure previously implicated in the processing of reward-related uncertainty, responds to the skewness of predicted reward distributions. Some insula responses increased in a monotonic fashion with skewness (irrespective of individual skewness preferences), whereas others were similarly elevated to both negative and positive as opposed to no reward skew. These data support the notion that the asymmetry of reward distributions is processed in the brain and, taken together with replicated findings of mean coding in the striatum and variance coding in the cingulate, suggest that the brain codes distinct aspects of reward distributions in a distributed fashion. PMID:21849610

  17. No evidence that skewing of X chromosome inactivation patterns is transmitted to offspring in humans

    PubMed Central

    Bolduc, Véronique; Chagnon, Pierre; Provost, Sylvie; Dubé, Marie-Pierre; Belisle, Claude; Gingras, Marianne; Mollica, Luigina; Busque, Lambert

    2007-01-01

    Skewing of X chromosome inactivation (XCI) can occur in normal females and increases in tissues with age. The mechanisms underlying skewing in normal females, however, remain controversial. To better understand the phenomenon of XCI in nondisease states, we evaluated XCI patterns in epithelial and hematopoietic cells of over 500 healthy female mother-neonate pairs. The incidence of skewing observed in mothers was twice that observed in neonates, and in both cohorts, the incidence of XCI was lower in epithelial cells than hematopoietic cells. These results suggest that XCI incidence varies by tissue type and that age-dependent mechanisms can influence skewing in both epithelial and hematopoietic cells. In both cohorts, a correlation was identified in the direction of skewing in epithelial and hematopoietic cells, suggesting common underlying skewing mechanisms across tissues. However, there was no correlation between the XCI patterns of mothers and their respective neonates, and skewed mothers gave birth to skewed neonates at the same frequency as nonskewed mothers. Taken together, our data suggest that in humans, the XCI pattern observed at birth does not reflect a single heritable genetic locus, but rather corresponds to a complex trait determined, at least in part, by selection biases occurring after XCI. PMID:18097474

  18. Enhanced online convolutional neural networks for object tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and updating of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
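The k-means++ initialization mentioned above can be sketched on toy one-dimensional data; the data, the dimensionality, and the parameters here are hypothetical stand-ins, not the authors' filter-initialization pipeline, which would operate on image patches:

```python
import random

# k-means++ seeding on toy 1-D data: the first center is drawn uniformly, each
# subsequent one with probability proportional to the squared distance to the
# nearest already-chosen center. A hypothetical stand-in for initializing
# convolution filters from patches, as the abstract describes.

def kmeanspp_seed(points, k, rng):
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r, acc = rng.random() * sum(d2), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:            # weighted sampling by squared distance
                centers.append(p)
                break
    return centers

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 9.8, 10.0]   # three well-separated clusters
print(sorted(kmeanspp_seed(pts, 3, random.Random(0))))
```

Because points far from existing centers are favored, the seeds tend to land in distinct clusters, giving the subsequent k-means (or filter) updates a better starting point than uniform random initialization.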

  19. Cellular basis of gastrulation in the sand dollar Scaphechinus mirabilis.

    PubMed

    Kominami, T; Takata, H

    2000-12-01

    The processes of gastrulation in the sand dollar Scaphechinus mirabilis are quite different from those in regular echinoids. In this study, we explored the cellular basis of gastrulation in this species with several methods. Cell-tracing experiments revealed that the prospective endodermal cells were convoluted throughout the invagination processes. Histological observation showed that the ectodermal layer remained thickened, and the vegetal cells retained an elongated shape until the last step of invagination. Further, most of the vegetal ectodermal cells were skewed or distorted. Wedge-shaped cells were common in the vegetal ectoderm, especially at the subequatorial region. In these embryos, unlike the embryos of regular echinoids, secondary mesenchyme cells did not seem to exert the force to pull up the archenteron toward the inner surface of the apical plate. In fact, the archenteron cells were not stretched along the axis of elongation and were in close contact with each other. Here we found that gastrulation was completely blocked when the embryos were attached to a glass dish coated with poly-L-lysine, in which the movement of the ectodermal layer was inhibited. These results suggest that a force generated by the thickened ectoderm, rather than rearrangement of the archenteron cells, may play a key role in the archenteron elongation in S. mirabilis embryos.

  20. US-SOMO HPLC-SAXS module: dealing with capillary fouling and extraction of pure component patterns from poorly resolved SEC-SAXS data

    PubMed Central

    Brookes, Emre; Vachette, Patrice; Rocco, Mattia; Pérez, Javier

    2016-01-01

    Size-exclusion chromatography coupled with SAXS (small-angle X-ray scattering), often performed using a flow-through capillary, should allow direct collection of monodisperse sample data. However, capillary fouling issues and non-baseline-resolved peaks can hamper its efficacy. The UltraScan solution modeler (US-SOMO) HPLC-SAXS (high-performance liquid chromatography coupled with SAXS) module provides a comprehensive framework to analyze such data, starting with a simple linear baseline correction and symmetrical Gaussian decomposition tools [Brookes, Pérez, Cardinali, Profumo, Vachette & Rocco (2013). J. Appl. Cryst. 46, 1823–1833]. In addition to several new features, substantial improvements to both routines have now been implemented, comprising the evaluation of outcomes by advanced statistical tools. The novel integral baseline-correction procedure is based on the more sound assumption that the effect of capillary fouling on scattering increases monotonically with the intensity scattered by the material within the X-ray beam. Overlapping peaks, often skewed because of sample interaction with the column matrix, can now be accurately decomposed using non-symmetrical modified Gaussian functions. As an example, the case of a polydisperse solution of aldolase is analyzed: from heavily convoluted peaks, individual SAXS profiles of tetramers, octamers and dodecamers are extracted and reliably modeled. PMID:27738419

  1. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  2. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  3. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

    The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H₂O, H₂, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35–50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  4. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743

  5. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
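The design of such gambles can be illustrated numerically: equiprobable ternary lotteries matched on expected value and variance but mirrored in skewness. The payoff values below are illustrative stand-ins, not the study's reward magnitudes:

```python
# Equiprobable ternary lotteries matched on expected value (EV) and variance
# but differing only in skewness, in the spirit of the gambles described
# above. Payoffs are illustrative, not the study's juice magnitudes.

def moments(outcomes):
    n = len(outcomes)
    mean = sum(outcomes) / n
    var = sum((x - mean) ** 2 for x in outcomes) / n
    m3 = sum((x - mean) ** 3 for x in outcomes) / n
    return mean, var, m3 / var ** 1.5       # mean, variance, skewness

pos = (0.0, 1.0, 5.0)    # positively skewed lottery
neg = (-1.0, 3.0, 4.0)   # its mirror image about the mean: negatively skewed
m1, v1, s1 = moments(pos)
m2, v2, s2 = moments(neg)
print(m1 == m2, abs(v1 - v2) < 1e-12, s1 > 0 > s2)  # → True True True
```

Because the first two moments are matched, any systematic preference between such a pair isolates skewness-risk, which is the experimental logic the abstract describes.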

  6. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    DTIC Science & Technology

    2015-12-15

    Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network… Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks; their… detection accuracy and speed on the fine-grained Caltech-UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep

  7. Investigating the Investigative Task: Testing for Skewness--An Investigation of Different Test Statistics and Their Power to Detect Skewness

    ERIC Educational Resources Information Center

    Tabor, Josh

    2010-01-01

    On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
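The power comparison described above can be sketched as a small Monte Carlo experiment: calibrate a skewness statistic's critical value by simulation under a normal population, then estimate its power against a right-skewed one. The statistic (the standardized third moment), sample size, and trial counts below are illustrative choices, not the paper's:

```python
import random

# Monte Carlo power sketch: calibrate the null distribution of the skewness
# statistic g1 = m3 / m2^(3/2) by sampling from a normal population, then
# estimate power against a right-skewed Exp(1) population. Sample size and
# trial counts are illustrative, not taken from the paper.

def g1(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

rng = random.Random(42)
n, trials = 30, 2000

null = sorted(g1([rng.gauss(0, 1) for _ in range(n)]) for _ in range(trials))
crit = null[int(0.95 * trials)]          # one-sided 5% critical value

power = sum(g1([rng.expovariate(1.0) for _ in range(n)]) > crit
            for _ in range(trials)) / trials
print(power > 0.5)   # Exp(1) has skewness 2, readily detected at n = 30
```

The same harness can compare candidate statistics (e.g. quartile-based measures versus the third moment) by swapping the function passed over the samples.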

  8. Social and genetic structure of paper wasp cofoundress associations: tests of reproductive skew models.

    PubMed

    Field, J; Solís, C R; Queller, D C; Strassmann, J E

    1998-06-01

    Recent models postulate that the members of a social group assess their ecological and social environments and agree on a "social contract" of reproductive partitioning (skew). We tested social contracts theory by using DNA microsatellites to measure skew in 24 cofoundress associations of paper wasps, Polistes bellicosus. In contrast to theoretical predictions, there was little variation in cofoundress relatedness, and relatedness either did not predict skew or was negatively correlated with it; the dominant/subordinate size ratio, assumed to reflect relative fighting ability, did not predict skew; and high skew was associated with decreased aggression by the rank 2 subordinate toward the dominant. High skew was associated with increased group size. A difficulty with measuring skew in real systems is the frequent changes in group composition that commonly occur in social animals. In P. bellicosus, 61% of egg layers and an unknown number of non-egg layers were absent by the time nests were collected. The social contracts models provide an attractive general framework linking genetics, ecology, and behavior, but there have been few direct tests of their predictions. We question assumptions underlying the models and suggest directions for future research.

  9. Metric adjusted skew information

    PubMed Central

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner–Yanase–Dyson skew information to something we call “metric adjusted skew information” (of a state with respect to a conserved observable). This “skew information” is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova–Chentsov functions describing the possible quantum statistics is a Bauer simplex and determine its extreme points. We determine a particularly simple skew information, the “λ-skew information,” parametrized by a λ ∈ (0, 1], and show that the convex cone this family generates coincides with the set of all metric adjusted skew informations. PMID:18635683
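For reference, the Wigner–Yanase quantity being generalized, together with its Dyson extension, can be written as follows (standard definitions stated here for context, not quoted from the paper):

```latex
% Wigner–Yanase skew information of a state \rho with respect to a
% conserved observable A:
I_{1/2}(\rho, A) \;=\; -\tfrac{1}{2}\,\operatorname{Tr}\bigl[\rho^{1/2}, A\bigr]^{2},
% and its Dyson generalization, for p \in (0,1):
I_{p}(\rho, A) \;=\; -\tfrac{1}{2}\,\operatorname{Tr}\Bigl(\bigl[\rho^{p}, A\bigr]\,\bigl[\rho^{1-p}, A\bigr]\Bigr).
```

Both quantities are non-negative, are bounded by the variance of A in the state ρ, and vanish when A commutes with ρ; the metric adjusted skew informations of the abstract form a family containing these as special cases.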

  10. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. The effect of different intensity measures and earthquake directions on the seismic assessment of skewed highway bridges

    NASA Astrophysics Data System (ADS)

    Bayat, M.; Daneshjoo, F.; Nisticò, N.

    2017-01-01

    In this study the probable seismic behavior of skewed bridges with continuous decks under earthquake excitations from different directions is investigated. A 45° skewed bridge is studied. A suite of 20 records is used to perform an Incremental Dynamic Analysis (IDA) for fragility curves. Four different earthquake directions have been considered: -45°, 0°, 22.5°, 45°. A sensitivity analysis on different spectral intensity measures is presented; the efficiency and practicality of different intensity measures have been studied. The fragility curves obtained indicate that the critical direction for skewed bridges is the skew direction as well as the longitudinal direction. The study shows the importance of finding the most critical earthquake in understanding and predicting the behavior of skewed bridges.

  12. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2006-04-18

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.

  13. Local existence of N=1 supersymmetric gauge theory in four Dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akbar, Fiki T.; Gunara, Bobby E.; Zen, Freddy P.

    2015-04-16

    In this paper, we prove the local existence of N=1 supersymmetric gauge theory in four dimensions. We start from the Lagrangian coupling chiral and vector multiplets with a constant gauge kinetic function, considering only the bosonic part by setting all fermionic fields to zero at the level of the equations of motion. We consider a U(n) model as the isometry of the scalar-field internal geometry, and we use a nonlinear semigroup method to prove local existence.

  14. Fractional Number Operator and Associated Fractional Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Rguigui, Hafedh

    2018-03-01

    In this paper, we study the fractional number operator as an analog of the finite-dimensional fractional Laplacian. An important relation with the Ornstein-Uhlenbeck process is given. Using a semigroup approach, the solution of the Cauchy problem associated to the fractional number operator is presented. By means of the Mittag-Leffler function and the Laplace transform, we give the solution of the Caputo time fractional diffusion equation and Riemann-Liouville time fractional diffusion equation in infinite dimensions associated to the fractional number operator.

  15. Theoretical Limits of Damping Attainable by Smart Beams with Rate Feedback

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1997-01-01

    Using a generally accepted model we present a comprehensive analysis (within the page limitation) of an Euler-Bernoulli beam with PZT sensor-actuator and pure rate feedback. The emphasis is on the root locus: the dependence of the attainable damping on the feedback gain. There is a critical value of the gain beyond which the damping decreases to zero. We construct the time-domain response using semigroup theory, and show that the eigenfunctions form a Riesz basis, leading to a 'modal' expansion.

  16. Convolution of Two Series

    ERIC Educational Resources Information Center

    Umar, A.; Yusau, B.; Ghandi, B. M.

    2007-01-01

    In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced to higher secondary school classes, and has the potential of providing a good background for the well-known convolution of functions.
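    The convolution of two series discussed in the note is the Cauchy product. The following sketch (not from the note itself; names are illustrative) computes it for the geometric series, whose self-convolution has the closed form n + 1:

```python
def cauchy_product(a, b, n_terms):
    """Convolution of two series: c_n = sum_{k=0}^{n} a_k * b_{n-k}."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_terms)]

# Example: 1/(1-x) has coefficients a_n = 1 for all n;
# its Cauchy square 1/(1-x)^2 has coefficients n + 1.
a = [1] * 6
c = cauchy_product(a, a, 6)
# c == [1, 2, 3, 4, 5, 6]
```

This mirrors polynomial multiplication, which is one reason the topic works as a bridge toward the convolution of functions.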

  17. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.

  18. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

    Convolutional Codes . in Proc Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer. Rate Compatible Punctured Convolutional Codes . in Proc Int. Conf...achieved by using a low rate (r = 0.5), high constraint length (e.g., 32) punctured convolutional code . Code puncturing provides for a variable rate code ...investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error

  19. Modeling and Simulation of a Non-Coherent Frequency Shift Keying Transceiver Using a Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2008-09-01

    Convolutional Encoder Block Diagram of code rate r = 1/2 and...most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n... convolutional code for r = 1/2 and κ = 3, namely [7 5], is used. Figure 2 Convolutional Encoder Block Diagram of code rate r = 1/2 and

  20. Investigating the detection of multi-homed devices independent of operating systems

    DTIC Science & Technology

    2017-09-01

    timestamp data was used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed...
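    The linear-regression step the report mentions amounts to fitting a least-squares slope to (local time, observed offset) pairs. A minimal sketch, with synthetic data and variable names that are illustrative assumptions rather than the study's own:

```python
def estimate_clock_skew(times, offsets):
    """Least-squares slope of observed clock offset versus local time.

    A nonzero slope indicates the remote clock drifts (skews) relative
    to ours; measurements from one physical host tend to share a
    consistent slope, which is what the detection relies on.
    """
    n = len(times)
    mt = sum(times) / n
    mo = sum(offsets) / n
    num = sum((t - mt) * (o - mo) for t, o in zip(times, offsets))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Synthetic example: a 50 ppm drift on top of a constant offset.
times = list(range(0, 1000, 100))
offsets = [5.0 + 50e-6 * t for t in times]
skew = estimate_clock_skew(times, offsets)  # ≈ 50e-6
```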

  1. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform convolutional codes in the short-blocklength regime, because convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.

  2. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
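    The convolution theorem the authors rely on states that circular convolution in the spatial domain is pointwise multiplication in the frequency domain. A minimal 1D illustration using a naive O(n²) DFT in pure Python (the helper names are invented for this sketch; the paper's implementation uses CUDA-accelerated FFTs on 3D data):

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform; inverse divides by n."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolution_freq(a, b):
    """Convolution theorem: conv(a, b) = IDFT(DFT(a) * DFT(b))."""
    fa, fb = dft(a), dft(b)
    return dft([x * y for x, y in zip(fa, fb)], inverse=True)

def circular_convolution_direct(a, b):
    n = len(a)
    return [sum(a[k] * b[(j - k) % n] for k in range(n)) for j in range(n)]

a, b = [1, 2, 3, 4], [0, 1, 0, 0]
freq = circular_convolution_freq(a, b)
direct = circular_convolution_direct(a, b)
# Both give the cyclic shift [4, 1, 2, 3] (up to floating-point error).
```

With an FFT the frequency-domain route costs O(n log n) instead of O(n²), which is what makes it attractive for large 3D images.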

  3. Image statistics and the perception of surface gloss and lightness.

    PubMed

    Kim, Juno; Anderson, Barton L

    2010-07-01

    Despite previous data demonstrating the critical importance of 3D surface geometry in the perception of gloss and lightness, I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson (2007) recently proposed that a simple image statistic--histogram or sub-band skew--is computed by the visual system to infer the gloss and albedo of surfaces. One key source of evidence used to support this claim was an experiment in which adaptation to skewed image statistics resulted in opponent aftereffects in observers' judgments of gloss and lightness. We report a series of adaptation experiments that were designed to assess the cause of these aftereffects. We replicated their original aftereffects in gloss but found no consistent aftereffect in lightness. We report that adaptation to zero-skew adaptors produced similar aftereffects as positively skewed adaptors, and that negatively skewed adaptors induced no reliable aftereffects. We further find that the adaptation effect observed with positively skewed adaptors is not robust to changes in mean luminance that diminish the intensity of the luminance extrema. Finally, we show that adaptation to positive skew reduces (rather than increases) the apparent lightness of light pigmentation on non-uniform albedo surfaces. These results challenge the view that the adaptation results reported by Motoyoshi et al. (2007) provide evidence that skew is explicitly computed by the visual system.
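    The histogram-skew statistic at issue is the third standardized moment of the luminance distribution. A rough sketch (the toy "glossy" and "matte" distributions below are illustrative, not the stimuli used in the experiments):

```python
def sample_skewness(values):
    """Third standardized moment of a pixel-intensity histogram."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0:
        return 0.0
    return sum((v - mean) ** 3 for v in values) / (n * var ** 1.5)

# A luminance distribution with a long bright tail (e.g., specular
# highlights) is positively skewed; a symmetric histogram has skew 0.
glossy = [10] * 90 + [200] * 10
matte = [80, 90, 100, 110, 120]
# sample_skewness(glossy) > 0; sample_skewness(matte) == 0.0
```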

  4. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    USGS Publications Warehouse

    Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.

    2004-01-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimations at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.

  5. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    NASA Astrophysics Data System (ADS)

    Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.

    2004-07-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimations at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
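    Bulletin 17B combines a station skew with the regional (generalized) skew as an inverse-MSE weighted average. The sketch below states that standard formula; the numeric MSE values are illustrative assumptions, not taken from this study:

```python
def weighted_skew(station_skew, mse_station, regional_skew, mse_regional):
    """Bulletin 17B weighted skew: inverse-MSE weighting of the station
    skew and the regional (generalized) skew estimate."""
    return ((mse_regional * station_skew + mse_station * regional_skew)
            / (mse_station + mse_regional))

# A short record (large station-skew MSE) pulls the estimate toward
# the regional value; these inputs are purely illustrative.
gw = weighted_skew(station_skew=0.8, mse_station=0.30,
                   regional_skew=-0.1, mse_regional=0.10)
# gw lies between the two skews, closer to the regional one.
```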

  6. Individual differences in loss aversion and preferences for skewed risks across adulthood.

    PubMed

    Seaman, Kendra L; Green, Mikella A; Shu, Stephen; Samanez-Larkin, Gregory R

    2018-06-01

    In a previous study, we found adult age differences in the tendency to accept more positively skewed gambles (with a small chance of a large win) than other equivalent risks, or an age-related positive-skew bias. In the present study, we examined whether loss aversion explained this bias. A total of 508 healthy participants (ages 21-82) completed measures of loss aversion and skew preference. Age was not related to loss aversion. Although loss aversion was a significant predictor of gamble acceptance, it did not influence the age-related positive-skew bias. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Flow in Rotating Serpentine Coolant Passages With Skewed Trip Strips

    NASA Technical Reports Server (NTRS)

    Tse, David G.N.; Steuber, Gary

    1996-01-01

    Laser velocimetry was utilized to map the velocity field in serpentine turbine blade cooling passages with skewed trip strips. The measurements were obtained at Reynolds and Rotation numbers of 25,000 and 0.24 to assess the influence of trips, passage curvature and Coriolis force on the flow field. The interaction of the secondary flows induced by skewed trips with the passage rotation produces a swirling vortex and a corner recirculation zone. With trips skewed at +45 deg, the secondary flows remain unaltered as the cross-flow proceeds from the passage to the turn. However, the flow characteristics at these locations differ when trips are skewed at -45 deg. Changes in the flow structure are expected to augment heat transfer, in agreement with the heat transfer measurements of Johnson et al. The present results show that trips skewed at -45 deg in the outward flow passage and at +45 deg in the inward flow passage maximize heat transfer. The present measurements were related to the heat transfer measurements of Johnson et al. to connect the fluid flow and heat transfer results.

  8. Regional skew for California, and flood frequency for selected sites in the Sacramento-San Joaquin River Basin, based on data through water year 2006

    USGS Publications Warehouse

    Parrett, Charles; Veilleux, Andrea; Stedinger, J.R.; Barth, N.A.; Knifong, Donna L.; Ferris, J.C.

    2011-01-01

    Improved flood-frequency information is important throughout California in general and in the Sacramento-San Joaquin River Basin in particular, because of an extensive network of flood-control levees and the risk of catastrophic flooding. A key first step in updating flood-frequency information is determining regional skew. A Bayesian generalized least squares (GLS) regression method was used to derive a regional-skew model based on annual peak-discharge data for 158 long-term (30 or more years of record) stations throughout most of California. The desert areas in southeastern California had too few long-term stations to reliably determine regional skew for that hydrologically distinct region; therefore, the desert areas were excluded from the regional skew analysis for California. Of the 158 long-term stations used to determine regional skew, 145 have minimally regulated annual-peak discharges, and 13 stations are dam sites for which unregulated peak discharges were estimated from unregulated daily maximum discharge data furnished by the U.S. Army Corps of Engineers. Station skew was determined by using an expected moments algorithm (EMA) program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual peak-discharge data. The Bayesian GLS regression method previously developed was modified because of the large cross correlations among concurrent recorded peak discharges in California and the use of censored data and historical flood information with the new expected moments algorithm. In particular, to properly account for these cross-correlation problems and develop a suitable regression model and regression diagnostics, a combination of Bayesian weighted least squares and generalized least squares regression was adopted. This new methodology identified a nonlinear function relating regional skew to mean basin elevation.
The regional skew values ranged from -0.62 for a mean basin elevation of zero to 0.61 for a mean basin elevation of 11,000 feet. This relation between skew and elevation reflects the interaction of snow with rain, which increases with increased elevation. The equivalent record length for the new regional skew ranges from 52 to 65 years of record, depending upon mean basin elevation. The old regional skew map in Bulletin 17B, published by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data (1982), reported an equivalent record length of only 17 years. The newly developed regional skew relation for California was used to update flood frequency for the 158 sites used in the regional skew analysis as well as 206 selected sites in the Sacramento-San Joaquin River Basin. For these sites, annual-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years were determined on the basis of data through water year 2006. The expected moments algorithm was used for determining the magnitude and frequency of floods at gaged sites by using regional skew values and using the basic approach outlined in Bulletin

  9. Development and application of deep convolutional neural network in target detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some open problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  10. A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.

    PubMed

    Rhiel, G Steven

    2007-02-01

    This study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values remain applicable to those skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
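    The estimator itself is simple: the sample range divided by d(n) estimates the standard deviation, and a(n) corrects for bias. A minimal sketch; the numeric constants below are placeholders, since the actual d(n) and a(n) values come from Rhiel's tables for each distribution and sample size:

```python
def cv_high_low(high, low, mean, d_n, a_n=1.0):
    """Range-based coefficient of variation.

    (high - low) / d_n estimates sigma, where d_n is the standardized
    mean range; a_n adjusts for bias. Both constants are tabulated
    (Rhiel); the values used in the example are purely illustrative.
    """
    sigma_hat = (high - low) / d_n
    return sigma_hat / (a_n * mean)

# Illustrative numbers only: range 100, d_n = 2.5 -> sigma_hat = 40.
cv = cv_high_low(high=150.0, low=50.0, mean=90.0, d_n=2.5)
# cv == 40 / 90 ≈ 0.444
```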

  11. A spectral nudging method for the ACCESS1.3 atmospheric model

    NASA Astrophysics Data System (ADS)

    Uhe, P.; Thatcher, M.

    2015-06-01

    A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10-30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
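    The speedup from replacing a two-dimensional convolution with two one-dimensional passes relies on the kernel being separable. A toy sketch of that idea (names and data invented; the ACCESS scheme operates on real atmospheric fields, not this 3×3 grid):

```python
def conv1d(seq, kernel):
    """'Same'-size 1D convolution with zero padding at the edges."""
    n, m = len(seq), len(kernel)
    half = m // 2
    return [sum(kernel[j] * seq[i + j - half]
                for j in range(m) if 0 <= i + j - half < n)
            for i in range(n)]

def filter_separable(field, kernel):
    """Apply a separable 2D filter as two 1D passes: rows, then columns."""
    rows = [conv1d(row, kernel) for row in field]
    cols = [conv1d(list(col), kernel) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]  # transpose back

field = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
kernel = [1 / 3, 1 / 3, 1 / 3]
smoothed = filter_separable(field, kernel)
# The impulse spreads evenly: every entry equals 1.0.
```

For an m×m kernel on an n×n grid this replaces O(n²m²) work with O(n²m), the same scaling argument behind the reported 10-30x improvement.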

  12. A spectral nudging method for the ACCESS1.3 atmospheric model

    NASA Astrophysics Data System (ADS)

    Uhe, P.; Thatcher, M.

    2014-10-01

    A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.

  13. Weak interactions, omnivory and emergent food-web properties.

    PubMed

    Emmerson, Mark; Yearsley, Jon M

    2004-02-22

    Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly.

  14. Defining the cause of skewed X-chromosome inactivation in X-linked mental retardation by use of a mouse model.

    PubMed

    Muers, Mary R; Sharpe, Jacqueline A; Garrick, David; Sloane-Stanley, Jacqueline; Nolan, Patrick M; Hacker, Terry; Wood, William G; Higgs, Douglas R; Gibbons, Richard J

    2007-06-01

    Extreme skewing of X-chromosome inactivation (XCI) is rare in the normal female population but is observed frequently in carriers of some X-linked mutations. Recently, it has been shown that various forms of X-linked mental retardation (XLMR) have a strong association with skewed XCI in female carriers, but the mechanisms underlying this skewing are unknown. ATR-X syndrome, caused by mutations in a ubiquitously expressed, chromatin-associated protein, provides a clear example of XLMR in which phenotypically normal female carriers virtually all have highly skewed XCI biased against the X chromosome that harbors the mutant allele. Here, we have used a mouse model to understand the processes causing skewed XCI. In female mice heterozygous for a null Atrx allele, we found that XCI is balanced early in embryogenesis but becomes skewed over the course of development, because of selection favoring cells expressing the wild-type Atrx allele. Unexpectedly, selection does not appear to be the result of general cellular-viability defects in Atrx-deficient cells, since it is restricted to specific stages of development and is not ongoing throughout the life of the animal. Instead, there is evidence that selection results from independent tissue-specific effects. This illustrates an important mechanism by which skewed XCI may occur in carriers of XLMR and provides insight into the normal role of ATRX in regulating cell fate.

  15. Sociality, mating system and reproductive skew in marmots: evidence and hypotheses.

    PubMed

    Allainé

    2000-10-05

    Marmot species exhibit a great diversity of social structure, mating systems and reproductive skew. In particular, among the social species (i.e. all except Marmota monax), the yellow-bellied marmot appears quite different from the others. The yellow-bellied marmot is primarily polygynous with an intermediate level of sociality and low reproductive skew among females. In contrast, all other social marmot species are mainly monogamous, highly social and with marked reproductive skew among females. To understand the evolution of this difference in reproductive skew, I examined four possible explanations identified from reproductive skew theory. From the literature, I then reviewed evidence to investigate if marmot species differ in: (1) the ability of dominants to control the reproduction of subordinates; (2) the degree of relatedness between group members; (3) the benefit for subordinates of remaining in the social group; and (4) the benefit for dominants of retaining subordinates. I found that the optimal skew hypothesis may apply for both sets of species. I suggest that yellow-bellied marmot females may benefit from retaining subordinate females and in return have to concede them reproduction. On the contrary, monogamous marmot species may gain by suppressing the reproduction of subordinate females to maximise the efficiency of social thermoregulation, even at the risk of departure of subordinate females from the family group. Finally, I discuss scenarios for the simultaneous evolution of sociality, monogamy and reproductive skew in marmots.

  16. Earthquake fragility assessment of curved and skewed bridges in Mountain West region.

    DOT National Transportation Integrated Search

    2016-09-01

    Reinforced concrete (RC) bridges with both skew and curvature are common in areas with : complex terrains. Skewed and/or curved bridges were found in existing studies to exhibit more : complicated seismic performance than straight bridges, however th...

  17. Earthquake fragility assessment of curved and skewed bridges in Mountain West region : research brief.

    DOT National Transportation Integrated Search

    2016-09-01

    the ISSUE : the RESEARCH : Earthquake Fragility : Assessment of Curved : and Skewed Bridges in : Mountain West Region : Reinforced concrete bridges with both skew and curvature are common in areas with complex terrains. : These bridges are irregular ...

  18. Invariant Measures for Dissipative Dynamical Systems: Abstract Results and Applications

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Glatt-Holtz, Nathan E.

    2012-12-01

    In this work we study certain invariant measures that can be associated to the time averaged observation of a broad class of dissipative semigroups via the notion of a generalized Banach limit. Consider an arbitrary complete separable metric space X which is acted on by any continuous semigroup { S( t)} t ≥ 0. Suppose that { S( t)} t ≥ 0 possesses a global attractor {{A}}. We show that, for any generalized Banach limit LIM T → ∞ and any probability distribution of initial conditions {{m}_0}, that there exists an invariant probability measure {{m}}, whose support is contained in {{A}}, such that intX \\varphi(x) d{m}(x) = \\underset{t rightarrow infty}LIM1/T int_0^T int_X \\varphi(S(t) x) d{m}_0(x) dt, for all observables φ living in a suitable function space of continuous mappings on X. This work is based on the framework of Foias et al. (Encyclopedia of mathematics and its applications, vol 83. Cambridge University Press, Cambridge, 2001); it generalizes and simplifies the proofs of more recent works (Wang in Disc Cont Dyn Syst 23(1-2):521-540, 2009; Lukaszewicz et al. in J Dyn Diff Eq 23(2):225-250, 2011). In particular our results rely on the novel use of a general but elementary topological observation, valid in any metric space, which concerns the growth of continuous functions in the neighborhood of compact sets. In the case when { S( t)} t ≥ 0 does not possess a compact absorbing set, this lemma allows us to sidestep the use of weak compactness arguments which require the imposition of cumbersome weak continuity conditions and thus restricts the phase space X to the case of a reflexive Banach space. Two examples of concrete dynamical systems where the semigroup is known to be non-compact are examined in detail. We first consider the Navier-Stokes equations with memory in the diffusion terms. This is the so called Jeffery's model which describes certain classes of viscoelastic fluids. 
We then consider a family of neutral delay differential equations, that is, equations with delays in the time-derivative terms. Such systems may arise in the study of wave propagation problems coming from certain first-order hyperbolic partial differential equations, for example in the study of line transmission problems. For the second example the phase space is X = C([−τ, 0], R^n), for some delay τ > 0, so that X is not reflexive in this case.
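The time-averaging construction above can be illustrated numerically with a toy example (ours, not the paper's): for the contraction semigroup S(t)x = x·e^(−t) on R, the global attractor is {0}, and the doubly averaged observable converges to φ(0), the integral of φ against the invariant measure (here the Dirac mass at 0).

```python
import numpy as np

# Toy illustration (not from the paper): S(t)x = x*exp(-t) is a dissipative
# semigroup on R with global attractor A = {0}. The time average of an
# observable phi over trajectories, averaged over an initial distribution m0,
# approaches the integral of phi against the invariant measure, phi(0).
rng = np.random.default_rng(0)
phi = np.cos                        # a continuous observable
x0 = rng.normal(size=200)           # samples from an initial distribution m0
T, dt = 200.0, 0.01
t = np.arange(0.0, T, dt)
traj = x0[:, None] * np.exp(-t)[None, :]   # S(t)x0 for each sample
time_avg = np.mean(phi(traj))       # discretized (1/T) double integral
print(time_avg)                     # → close to phi(0) = 1
```

The ordinary Cesàro average suffices here because the limit exists; the generalized Banach limit in the paper handles cases where it does not.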

  19. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. This approach can also raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM; the resulting method is here denoted skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of the parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of skew-normal IM is assessed via stochastic simulation. The results indicate that skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
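The paper fits a skew-normal mixture inside interval mapping via EM; as a much smaller hedged sketch, one can illustrate maximum-likelihood fitting of a single skew-normal to phenotype-like data using scipy's built-in MLE rather than a hand-written EM step (all parameter values below are illustrative):

```python
import numpy as np
from scipy import stats

# Hedged sketch: fit a single skew-normal by MLE (scipy's skewnorm.fit),
# not the paper's EM for a skew-normal *mixture*. Shape a = 0 recovers the
# symmetric normal, which is the "special case" noted in the abstract.
rng = np.random.default_rng(1)
a_true, loc_true, scale_true = 5.0, 10.0, 2.0
y = stats.skewnorm.rvs(a_true, loc=loc_true, scale=scale_true,
                       size=2000, random_state=rng)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(y)
print(a_hat)   # positive: the fit picks up the right skew
```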

  20. Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment

    DTIC Science & Technology

    2011-02-01

code rate convolutional codes or prioritized Rate-Compatible Punctured … "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, Volume 42, Issue 12, pp. 3073-3079, Dec. … Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise

  1. A Video Transmission System for Severely Degraded Channels

    DTIC Science & Technology

    2006-07-01

rate compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream … June 2000. 149 [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on … Farvardin [160] used rate compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may

  2. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  3. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

    PubMed Central

    Qu, Xiaobo; He, Yifan

    2018-01-01

Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods. PMID:29509666
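The "maximum competitive" idea described above can be sketched with fixed numpy filters (a hedged stand-in; the paper uses learned filters inside a trained network, and the kernel sizes here are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

# Hedged sketch: convolve at several kernel scales and keep, per pixel,
# the strongest response. Kernels are random stand-ins for learned filters.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
scales = [3, 5, 7]
responses = []
for k in scales:
    kern = rng.standard_normal((k, k)) / k       # stand-in learned filter
    responses.append(convolve2d(img, kern, mode="same"))
out = np.maximum.reduce(responses)   # per-pixel competition across scales
```

By construction the competitive output dominates every single-scale response at each pixel, which is what lets the network "choose" the scale locally.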

  4. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    PubMed

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.

  5. The formulation and estimation of a spatial skew-normal generalized ordered-response model.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an : associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal : distribut...

  6. Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting

    DOEpatents

    Williamson, Rodney L.; Zanner, Frank J.; Grose, Stephen M.

    1998-01-01

    The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap.

  7. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord [Eau Claire, WI; Cornett, Frank N [Chippewa Falls, WI

    2008-10-07

A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of the received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
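As a hedged discrete-time analogy of the deskew idea above (ours, not the patent's circuit, which adjusts analog delay lines), the lag between a known pattern and a delayed copy can be estimated by cross-correlation and then removed:

```python
import numpy as np

# Hedged model: estimate the skew (lag) of a received data signal against a
# clock-aligned training pattern via cross-correlation, then compensate.
rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=256).astype(float)   # training pattern
true_skew = 7                                          # samples of lag
received = np.roll(pattern, true_skew)

corr = [np.dot(pattern, np.roll(received, -lag)) for lag in range(32)]
est_skew = int(np.argmax(corr))        # lag with best alignment
deskewed = np.roll(received, -est_skew)
print(est_skew)   # → 7
```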

  8. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord [Redwood Shores, CA; Cornett, Frank N [Chippewa Falls, WI

    2011-10-04

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  9. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    PubMed

    Currie, L A

    2001-07-01

Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events in a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test, for very interesting reasons linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
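The "fundamentally skewed" class above rests on a standard fact that is easy to check numerically: for a Poisson counting process of rate λ, inter-arrival times are exponential with mean 1/λ, a strongly right-skewed distribution even though counts per unit interval are Poisson with mean λ.

```python
import numpy as np

# Simulate a Poisson process: exponential gaps -> cumulative arrival times.
rng = np.random.default_rng(42)
lam = 10.0                                   # events per unit time
gaps = rng.exponential(1.0 / lam, size=100_000)
arrivals = np.cumsum(gaps)                   # event times of the process
counts = np.histogram(arrivals, bins=np.arange(0, arrivals[-1]))[0]
print(gaps.mean())    # ≈ 1/lam = 0.1 (right-skewed inter-arrival times)
print(counts.mean())  # ≈ lam = 10 events per unit interval
```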

  10. Study on compensation algorithm of head skew in hard disk drives

    NASA Astrophysics Data System (ADS)

    Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan

    2011-10-01

In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real applications with high storage capacity, the head stack may be tilted due to environmental change, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). If these errors are below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance in severe environments will be degraded. In this paper, in-the-field compensation of small DC head skew variation across the stroke is proposed, supported by a zone table. Test results are provided demonstrating its effectiveness in reducing observer error and enhancing drive performance via accurate prediction of DC head skew.

  11. Asymmetric skew Bessel processes and their applications to finance

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; Goovaerts, Marc; Schoutens, Wim

    2006-02-01

In this paper, we extend Harrison and Shepp's construction of the skew Brownian motion (1981) and obtain a diffusion similar to the two-dimensional Bessel process with speed and scale densities discontinuous at one point. Natural generalizations to multi-dimensional and fractional-order Bessel processes are then discussed, as well as invariance properties. We call this family of diffusions asymmetric skew Bessel processes, as opposed to the skew Bessel processes defined in Barlow et al. [On Walsh's Brownian motions, Séminaire de Probabilités XXIII, Lecture Notes in Mathematics, vol. 1372, Springer, Berlin, New York, 1989, pp. 275-293]. We present factorizations involving (asymmetric skew) Bessel processes with random time. Finally, applications to the valuation of perpetuities and Asian options are proposed.

  12. Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting

    DOEpatents

    Williamson, R.L.; Zanner, F.J.; Grose, S.M.

    1998-01-13

    The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap. 4 figs.

  13. Steel framing strategies for highly skewed bridges to reduce/eliminate distortion near skewed supports.

    DOT National Transportation Integrated Search

    2014-05-01

    Different problems in straight skewed steel I-girder bridges are often associated with the methods used for detailing the cross-frames. Use of theoretical terms to describe these detailing methods and absence of complete and simplified design approac...

  14. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was obtained by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to produce, from the unprocessed input image, output close in quality to the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
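The equalization core behind the CLAHE ground truth mentioned above can be sketched in plain numpy (a hedged, simplified stand-in: real CLAHE additionally tiles the image into regions and clips the histogram before equalizing):

```python
import numpy as np

# Simplified stand-in for CLAHE: global histogram equalization only.
def hist_equalize(img, levels=256):
    """Map gray levels through the normalized cumulative histogram."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum() / img.size               # normalized CDF in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]                              # per-pixel lookup

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
eq = hist_equalize(low_contrast)   # narrow range stretched to full scale
```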

  15. A comparison of the convolution and TMR10 treatment planning algorithms for Gamma Knife® radiosurgery

    PubMed Central

    Wright, Gavin; Harrold, Natalie; Bownes, Peter

    2018-01-01

Aims To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on-time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (−1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore its implementation may require a re-evaluation of prescription doses. PMID:29657896
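The 1%/1 mm gamma analysis used above can be sketched in 1D (a hedged illustration only: the function name and test profile are ours, and clinical gamma analysis works on 2D/3D dose grids with interpolation):

```python
import numpy as np

# Hedged 1D sketch of gamma analysis: a reference point passes if some
# evaluated point lies within the combined dose-difference /
# distance-to-agreement tolerance ellipse (gamma <= 1).
def gamma_pass_rate(ref, ev, x, dose_tol=0.01, dta_mm=1.0):
    """ref, ev: dose arrays on positions x (mm); tolerances: 1% / 1 mm."""
    dd = (ev[None, :] - ref[:, None]) / (dose_tol * ref.max())
    dx = (x[None, :] - x[:, None]) / dta_mm
    gamma = np.sqrt(dd**2 + dx**2).min(axis=1)   # best match per ref point
    return np.mean(gamma <= 1.0)

x = np.linspace(-20, 20, 401)          # 0.1 mm grid
ref = np.exp(-x**2 / 50.0)             # Gaussian-ish dose profile
rate_same = gamma_pass_rate(ref, ref, x)           # identical → 1.0
rate_off = gamma_pass_rate(ref, 1.05 * ref, x)     # 5% offset fails near peak
```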

  16. Design of Intelligent Cross-Layer Routing Protocols for Airborne Wireless Networks Under Dynamic Spectrum Access Paradigm

    DTIC Science & Technology

    2011-05-01

rate convolutional codes or the prioritized Rate-Compatible Punctured … Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise ratio; SSIM … Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The

  17. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
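The 4f scheme above exploits the convolution theorem: multiplying the Fourier spectra of two images in the focal plane convolves them in the image plane. A hedged numerical analogue (ours, not the paper's optical experiment) with zero padding:

```python
import numpy as np
from scipy.signal import convolve2d

# Convolution theorem: pointwise product of zero-padded spectra equals the
# full linear 2D convolution of the two inputs.
rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = rng.random((16, 16))
shape = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
via_fft = np.real(np.fft.ifft2(np.fft.fft2(a, shape) * np.fft.fft2(b, shape)))
direct = convolve2d(a, b, mode="full")
print(np.allclose(via_fft, direct))   # → True
```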

  18. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

We address an estimation method for the isometric muscle tension of fingers, as fundamental research toward a neural-signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals. This convolution estimates the probability density of spike-invoking time in the muscle; here we hypothesize that each motor unit in a muscle generates spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
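The two-convolution pipeline described above can be sketched as follows (a hedged illustration: the kernel width, twitch time constant, and spike rate are our illustrative values, not the paper's):

```python
import numpy as np

# Hedged sketch of the two-convolution tension estimate.
fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
spikes = (rng.random(t.size) < 0.02).astype(float)   # detected spike array

# 1) spike array * normal distribution -> spike-time probability density
sig = 0.01                                   # 10 ms kernel width (assumed)
tk = np.arange(-0.05, 0.05, 1 / fs)
gauss = np.exp(-tk**2 / (2 * sig**2))
gauss /= gauss.sum()
density = np.convolve(spikes, gauss, mode="same")

# 2) density * isometric twitch (motor-unit impulse response) -> tension
tau = 0.03                                   # 30 ms twitch constant (assumed)
tw = np.arange(0, 0.2, 1 / fs)
twitch = (tw / tau) * np.exp(1 - tw / tau)   # normalized twitch shape
tension = np.convolve(density, twitch, mode="full")[: t.size]
```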

  19. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    PubMed

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
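The minimal-filtering idea behind WMFA can be shown in its smallest 1D instance, F(2,3), which computes two outputs of a 3-tap filter with 4 multiplications instead of the direct 6 (a hedged illustration of the principle; the paper tiles 2D/3D variants across conv layers):

```python
import numpy as np

# Winograd F(2,3): two outputs of a 3-tap valid correlation, 4 multiplies.
def winograd_f23(d, g):
    """d: 4 inputs, g: 3 filter taps -> 2 outputs of valid correlation."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, -3.0, 0.5])
g = np.array([0.25, -1.0, 2.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])      # sliding dot products
print(np.allclose(winograd_f23(d, g), direct))   # → True
```

The filter transform (the g-combinations) is precomputed once per filter, which is why the multiply savings carry over to repeated convolutions.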

  20. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    PubMed Central

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109

  1. On some new properties of fractional derivatives with Mittag-Leffler kernel

    NASA Astrophysics Data System (ADS)

    Baleanu, Dumitru; Fernandez, Arran

    2018-06-01

    We establish a new formula for the fractional derivative with Mittag-Leffler kernel, in the form of a series of Riemann-Liouville fractional integrals, which brings out more clearly the non-locality of fractional derivatives and is easier to handle for certain computational purposes. We also prove existence and uniqueness results for certain families of linear and nonlinear fractional ODEs defined using this fractional derivative. We consider the possibility of a semigroup property for these derivatives, and establish extensions of the product rule and chain rule, with an application to fractional mechanics.
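The series formula referred to above has, reconstructed from the published result (treat the normalization conventions here as an assumption, not a quotation), the form:

```latex
{}^{ABR}D^{\alpha}_{a+} f(t)
  \;=\; \frac{B(\alpha)}{1-\alpha}
  \sum_{n=0}^{\infty} \left(\frac{-\alpha}{1-\alpha}\right)^{n}
  {}^{RL}I^{\alpha n}_{a+} f(t),
```

where B(α) is the normalization function of the Mittag-Leffler kernel and ^{RL}I^{αn}_{a+} denotes the Riemann-Liouville fractional integral of order αn; the infinite series of integrals of increasing order makes the non-locality of the operator explicit.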

  2. Commutative semigroups of real and complex matrices. [with use of the jordan form

    NASA Technical Reports Server (NTRS)

    Brown, D. R.

    1974-01-01

The computation of divergence is studied. Covariance matrices to be analyzed admit a common diagonalization, or even triangulation. Sufficient conditions are given for such phenomena to take place; the arguments cover both real and complex matrices, and are not restricted to Hermitian or other special forms. Specifically, it is shown to be sufficient that the matrices in question commute in order to admit a common triangulation. Several results hold in the case that the matrices in question form a closed and bounded set, rather than only in the finite case.

  3. Elliptic operators with unbounded diffusion, drift and potential terms

    NASA Astrophysics Data System (ADS)

    Boutiah, S. E.; Gregorio, F.; Rhandi, A.; Tacelli, C.

    2018-02-01

We prove that the realization A_p in L^p(R^N), 1 < p < ∞, of the elliptic operator A = (1 + |x|^α)Δ + b|x|^{α-1}(x/|x|)·∇ − c|x|^β, with domain D(A_p) = {u ∈ W^{2,p}(R^N) : Au ∈ L^p(R^N)}, generates a strongly continuous analytic semigroup T(·) provided that α > 2, β > α − 2, for any constants b ∈ R and c > 0. This generalizes the recent results in [4] and in [16]. Moreover we show that T(·) is consistent, immediately compact and ultracontractive.

  4. The scaling of weak field phase-only control in Markovian dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Am-Shallem, Morag; Kosloff, Ronnie

We consider population transfer in open quantum systems, which are described by quantum dynamical semigroups (QDS). Using second-order perturbation theory of the Lindblad equation, we show that it depends on a weak external field only through the field's autocorrelation function, which is phase independent. Therefore, to leading order in perturbation theory, QDS cannot support dependence of the population transfer on the phase properties of weak fields. We examine an example of weak-field phase-dependent population transfer, and show that the phase dependence comes from the next order in the perturbation.

  5. Necessary optimality conditions for infinite dimensional state constrained control problems

    NASA Astrophysics Data System (ADS)

    Frankowska, H.; Marchini, E. M.; Mazzola, M.

    2018-06-01

    This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.

  6. A Note on the Asymptotic Behavior of Nonlinear Semigroups and the Range of Accretive Operators.

    DTIC Science & Technology

    1981-04-01

Crandall (see [2, p. 166]) and Pazy [10] in Hilbert space. For recent developments in Banach spaces see the papers by Kohlberg and Neyman [8, 9] and…essentially due to Kohlberg and Neyman [9], who use a different argument. They also show that if E is not reflexive and strictly convex (or if E* is…ACKNOWLEDGMENTS. I am grateful to Professor A. Pazy for several helpful conversations. I also wish to thank E. Kohlberg, A. Neyman and A. T. Plant for

  7. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  8. Local dynamic range compensation for scanning electron microscope imaging system by sub-blocking multiple peak HE with convolution.

    PubMed

    Sim, K S; Teh, V; Tey, Y C; Kho, T K

    2016-11-01

This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this new technique, we show that the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method includes a convolution operator which helps remove the blocking effect in SEM images after applying the new technique. By properly distributing suitable pixel values over the whole image, the convolution operator effectively removes the blocking effect. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.

  9. Measuring Skewness: A Forgotten Statistic?

    ERIC Educational Resources Information Center

    Doane, David P.; Seward, Lori E.

    2011-01-01

    This paper discusses common approaches to presenting the topic of skewness in the classroom, and explains why students need to know how to measure it. Two skewness statistics are examined: the Fisher-Pearson standardized third moment coefficient, and the Pearson 2 coefficient that compares the mean and median. The former is reported in statistical…
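The two statistics discussed above are easy to compute side by side on a right-skewed sample (illustrative data, ours): the Fisher-Pearson standardized third moment, and the Pearson 2 coefficient 3·(mean − median)/std.

```python
import numpy as np
from scipy import stats

# Both skewness statistics on an exponential sample (true skewness = 2).
rng = np.random.default_rng(0)
x = rng.exponential(size=10_000)
fisher_pearson = stats.skew(x)                        # third-moment measure
pearson2 = 3 * (x.mean() - np.median(x)) / x.std(ddof=1)
print(fisher_pearson)   # ≈ 2 for the exponential
print(pearson2)         # also positive, but on a different scale
```

The example also shows why the two are not interchangeable: both detect the right skew, yet their magnitudes differ.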

  10. Learning a Novel Pattern through Balanced and Skewed Input

    ERIC Educational Resources Information Center

    McDonough, Kim; Trofimovich, Pavel

    2013-01-01

    This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…

  11. Numerical solution for the velocity-derivative skewness of a low-Reynolds-number decaying Navier-Stokes flow

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1990-01-01

    The variation of the velocity-derivative skewness of a Navier-Stokes flow as the Reynolds number goes toward zero is calculated numerically. The value of the skewness, which has been somewhat controversial, is shown to become small at low Reynolds numbers.

  12. Investigation of free vibration characteristics for skew multiphase magneto-electro-elastic plate

    NASA Astrophysics Data System (ADS)

    Kiran, M. C.; Kattimani, S.

    2018-04-01

This article presents an investigation of a skew multiphase magneto-electro-elastic (MMEE) plate to assess its free vibration characteristics. A finite element (FE) model is formulated considering the different couplings involved via coupled constitutive equations. Transformation matrices are derived to transform the local degrees of freedom into global degrees of freedom for the nodes lying on the skew edges. The effect of different volume fractions (Vf) on the free vibration behavior is explicitly studied. In addition, the influence of the width-to-thickness ratio, the aspect ratio, and the stacking arrangement on the natural frequencies of the skew multiphase MEE plate is investigated. Particular attention is paid to the effect of the skew angle on the non-dimensional eigenfrequencies of the multiphase MEE plate with simply supported edges.

  13. Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction

    NASA Astrophysics Data System (ADS)

    Qiu, Liang; Quan, Dongxiao; Pan, Fei; Liu, Zhi

    2017-06-01

    We study the performance of the lower bound of skew information in the vicinity of transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction by use of quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values for the lower bound of skew information corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information closely relates to the correlation length of the model and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate to detect the critical point of XY spin chain with staggered Dzyaloshinskii-Moriya interaction.

  14. Geometric mean IELT and premature ejaculation: appropriate statistics to avoid overestimation of treatment efficacy.

    PubMed

    Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H

    2008-02-01

The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs appropriate statistics for correct interpretation of treatment results. To explain the correct use of geometric mean IELT values and the fold increase of the geometric mean IELT because of the positively skewed IELT distribution. Linking theoretical arguments to the outcome of several selective serotonin reuptake inhibitor and modern antidepressant study results. Geometric mean IELT and fold increase of geometric mean IELT. Log-transforming each separate IELT measurement of each individual man is the basis for the calculation of the geometric mean IELT. A drug-induced positively skewed IELT distribution necessitates the calculation of the geometric mean IELTs at baseline and during drug treatment. In a positively skewed IELT distribution, use of the "arithmetic" mean IELT risks an overestimation of the drug-induced ejaculation delay, as the mean IELT is always higher than the geometric mean IELT. Strong ejaculation-delaying drugs give rise to a strongly positively skewed IELT distribution, whereas weak ejaculation-delaying drugs give rise to (much) less skewed IELT distributions. Ejaculation delay is expressed as the fold increase of the geometric mean IELT. Drug-induced ejaculatory performance discloses a positively skewed IELT distribution, requiring the use of the geometric mean IELT and the fold increase of the geometric mean IELT.
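The central point above (the arithmetic mean always exceeds the geometric mean on positively skewed data, so it overstates delay) can be checked on synthetic lognormal-like IELT values; the numbers below are ours, purely illustrative:

```python
import numpy as np

# Synthetic positively skewed "IELT" data (lognormal, median ~60 s).
rng = np.random.default_rng(0)
ielt = rng.lognormal(mean=np.log(60.0), sigma=1.0, size=5000)
geometric_mean = np.exp(np.mean(np.log(ielt)))   # log, average, back-transform
arithmetic_mean = ielt.mean()
print(geometric_mean < arithmetic_mean)   # → True (AM > GM on skewed data)
```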

  15. Adaptive Neural Mechanism for Listing’s Law Revealed in Patients with Skew Deviation Caused by Brainstem or Cerebellar Lesion

    PubMed Central

    Fesharaki, Maryam; Karagiannis, Peter; Tweed, Douglas; Sharpe, James A.; Wong, Agnes M. F.

    2016-01-01

    Purpose Skew deviation is a vertical strabismus caused by damage to the otolithic–ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing’s law, which specifies the eye’s torsional angle as a function of its horizontal and vertical position. Methods Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied. Patients with diplopia and neurologic symptoms less than 1 month in duration were designated as acute (n = 4) and those with longer duration were classified as chronic (n = 10). Serial recordings were made in the four patients with acute skew deviation. With the head immobile, subjects made saccades to a target that moved between straight ahead and eight eccentric positions, while wearing search coils. At each target position, fixation was maintained for 3 seconds before the next saccade. From the eye position data, the plane of best fit, referred to as Listing’s plane, was fitted. Violations of Listing’s law were quantified by computing the “thickness” of this plane, defined as the SD of the distances to the plane from the data points. Results Both the hypertropic and hypotropic eyes in patients with acute skew deviation violated Listing’s and Donders’ laws—that is, the eyes did not show one consistent angle of torsion in any given gaze direction, but rather an abnormally wide range of torsional angles. In contrast, each eye in patients with chronic skew deviation obeyed the laws. However, in chronic skew deviation, Listing’s planes in both eyes had abnormal orientations. 
Conclusions Patients with acute skew deviation violated Listing’s law, whereas those with chronic skew deviation obeyed it, indicating that despite brain lesions, neural adaptation can restore Listing’s law so that the neural linkage between horizontal, vertical, and torsional eye position remains intact. Violation of Listing’s and Donders’ laws during fixation arises primarily from torsional drifts, indicating that patients with acute skew deviation have unstable torsional gaze holding that is independent of their horizontal–vertical eye positions. PMID:18172094
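
    The plane fit and "thickness" measure described in the Methods can be sketched as follows. The eye-position samples and noise levels below are hypothetical, and the least-squares fit is a generic implementation, not the authors' analysis code.

```python
import math, random

def fit_plane(points):
    """Least-squares fit of torsion t = a*h + b*v + c to (h, v, t) samples,
    solving the 3x3 normal equations by Gaussian elimination."""
    n = len(points)
    Shh = sum(h*h for h, v, t in points); Shv = sum(h*v for h, v, t in points)
    Svv = sum(v*v for h, v, t in points); Sh = sum(h for h, v, t in points)
    Sv = sum(v for h, v, t in points); St = sum(t for h, v, t in points)
    Sht = sum(h*t for h, v, t in points); Svt = sum(v*t for h, v, t in points)
    A = [[Shh, Shv, Sh, Sht], [Shv, Svv, Sv, Svt], [Sh, Sv, n, St]]
    for i in range(3):  # forward elimination with partial pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i])); A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x  # a, b, c

def thickness(points):
    """SD of the perpendicular distances from the samples to the fitted plane."""
    a, b, c = fit_plane(points)
    norm = math.sqrt(a*a + b*b + 1.0)
    d = [(t - (a*h + b*v + c)) / norm for h, v, t in points]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))

random.seed(1)
# Hypothetical data: torsion nearly a planar function of gaze (obeys Listing's law)...
flat = [(h, v, 0.1*h - 0.05*v + random.gauss(0, 0.1))
        for h in range(-20, 21, 5) for v in range(-20, 21, 5)]
# ...versus torsion drifting over a wide range of angles (violates the law).
drifty = [(h, v, random.gauss(0, 3.0))
          for h in range(-20, 21, 5) for v in range(-20, 21, 5)]
print(thickness(flat) < thickness(drifty))  # → True
```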

  16. Scalable Video Transmission Over Multi-Rate Multiple Access Channels

    DTIC Science & Technology

    2007-06-01

    Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE...source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC...Clark, and J. M. Geist, “Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding,” IEEE Transactions on

  17. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    Rate Compatible Punctured Convolutional (RCPC) codes for channel...vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding

  18. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  19. Performance-based seismic assessment of skewed bridges with and without considering soil-foundation interaction effects for various site classes

    NASA Astrophysics Data System (ADS)

    Ghotbi, Abdoul R.

    2014-09-01

    The seismic behavior of skewed bridges has not been well studied compared to that of straight bridges. Skewed bridges have shown extensive damage, especially due to deck rotation, shear key failure, abutment unseating and column-bent drift. This research therefore aims to study the behavior of skewed and straight highway overpass bridges, both with and without accounting for the effects of Soil-Structure Interaction (SSI), under near-fault ground motions. Because of several sources of uncertainty associated with the ground motions, soil and structure, a probabilistic approach is needed. Thus, a probabilistic methodology similar to the one developed by the Pacific Earthquake Engineering Research Center (PEER) was utilized to assess the probability of damage at various levels of shaking, using appropriate intensity measures with minimum dispersion. The probabilistic analyses were performed for various bridge configurations and site conditions, including sand ranging from loose to dense and clay ranging from soft to stiff. The results showed a considerable susceptibility of skewed bridges to deck rotation and shear key displacement. It was also found that SSI reduced the damage probability for most demands compared to the fixed-base model; however, deck rotation for all soil types, as well as abutment unseating for very loose sand and soft clay, showed an increase in damage probability relative to the fixed-base model. The damage probability for the various demands also decreased with increasing soil strength for both sandy and clayey sites. With respect to variations in the skew angle, a larger skew angle amplified the seismic response for the various demands, and deck rotation was especially sensitive to the skew angle. 
Furthermore, abutment unseating showed an increasing trend with skew angle for both fixed-base and SSI models.

  20. Computational analysis of current-loss mechanisms in a post-hole convolute driven by magnetically insulated transmission lines

    DOE PAGES

    Rose, D.  V.; Madrid, E.  A.; Welch, D.  R.; ...

    2015-03-04

    Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.

  1. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which required large training samples and samples of the same size. The input images are shifted and cropped to generate sub-images of equal size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected at random such that all subsets have the same number of elements and no two subsets are identical. These proper subsets serve as input layers for the convolutional neural network. Through the convolution layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test and training sets are obtained. In an experiment classifying red blood cells, white blood cells and calcium oxalate crystals, the classification accuracy reached 97% or more.
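
    The move-and-crop and random-subset steps can be sketched as follows; the image contents, window size and subset counts are illustrative assumptions, not the paper's actual configuration.

```python
import random

def crop_subimages(image, size, stride):
    """Slide a size-by-size window over a 2-D image (list of lists) to
    generate equally sized sub-images (the move-and-crop step)."""
    rows, cols = len(image), len(image[0])
    subs = []
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            subs.append([row[c:c + size] for row in image[r:r + size]])
    return subs

def random_subsets(items, n_subsets, subset_size, seed=0):
    """Draw distinct fixed-size proper subsets of the sub-image set;
    each subset would feed one input layer of the network."""
    rng = random.Random(seed)
    seen, subsets = set(), []
    while len(subsets) < n_subsets:
        pick = tuple(sorted(rng.sample(range(len(items)), subset_size)))
        if pick not in seen:  # ensure no two subsets are the same
            seen.add(pick)
            subsets.append([items[i] for i in pick])
    return subsets

# Hypothetical 8x8 "image" with arbitrary pixel values.
image = [[(r * 13 + c * 7) % 256 for c in range(8)] for r in range(8)]
subs = crop_subimages(image, size=4, stride=2)      # yields 3*3 = 9 sub-images
batches = random_subsets(subs, n_subsets=4, subset_size=5)
print(len(subs), [len(b) for b in batches])
```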

  2. Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness

    PubMed Central

    Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing

    2014-01-01

    We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing to cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to these other nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maximum) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure and derive and show that skewness is correlated with this definition. PMID:24555451
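
    A minimal sketch of the proposed measure, computing the sample skewness g1 of the mean firing rates over two hypothetical tuning curves (the rates are illustrative, not recorded data); note the measure is unchanged by rescaling the curve.

```python
def sample_skewness(rates):
    """Sample skewness g1 of the distribution of mean firing rates across
    a tuning curve; invariant to the curve's mean and variance."""
    n = len(rates)
    mean = sum(rates) / n
    m2 = sum((r - mean) ** 2 for r in rates) / n   # second central moment
    m3 = sum((r - mean) ** 3 for r in rates) / n   # third central moment
    return m3 / m2 ** 1.5

# Hypothetical tuning curves over 8 stimulus values (spikes/s).
sharp = [2, 2, 3, 2, 40, 3, 2, 2]          # narrow peak, broad valley
broad = [10, 14, 18, 22, 22, 18, 14, 10]   # shallow modulation
print(sample_skewness(sharp) > sample_skewness(broad))  # → True
```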

  3. New Families of Skewed Higher-Order Kernel Estimators to Solve the BSS/ICA Problem for Multimodal Sources Mixtures.

    PubMed

    Jabbar, Ahmed Najah

    2018-04-13

    This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.

  4. X Chromosome Inactivation in Women with Alcoholism

    PubMed Central

    Manzardo, Ann M.; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C.; Poje, Albert B.; Butler, Merlin G.

    2012-01-01

    Background All female mammals with two X chromosomes balance gene expression with males having only one X by inactivating one of their Xs (X chromosome inactivation, XCI). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. Increases in the prevalence of skewing of XCI in women with alcoholism could implicate biological risk factors. Methods The pattern of XCI was examined in DNA isolated in blood from 44 adult females meeting DSM IV criteria for an Alcohol Use Disorder, and 45 control females with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50–64:36), moderately skewed (65:35–80:20) and highly skewed (>80:20). Results XCI status from informative females with alcoholism was found to be random in 59% (n=26), moderately skewed in 27% (n=12) or highly skewed in 14% (n=6). Control subjects showed 60%, 29% and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ2 =0.14, 2 df, p=0.93). Conclusions Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss or unequal (non-random) expression of X-linked gene(s) or defects in alcoholism among females. PMID:22375556
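
    The study's three-way categorization of XCI ratios can be sketched as a simple classifier; the function name and example inputs are hypothetical.

```python
def classify_xci(percent_major):
    """Classify XCI by the percentage of cells inactivating the predominant
    X chromosome, using the study's cut-offs."""
    if percent_major > 80:
        return "highly skewed"      # >80:20
    if percent_major >= 65:
        return "moderately skewed"  # 65:35 to 80:20
    return "random"                 # 50:50 to 64:36

print([classify_xci(p) for p in (52, 70, 85)])
```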

  5. Defining surfaces for skewed, highly variable data

    USGS Publications Warehouse

    Helsel, D.R.; Ryker, S.J.

    2002-01-01

    Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.

  6. Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia

    USGS Publications Warehouse

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2009-01-01

    Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations with the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE). MSE is defined as the following quantity summed and divided by the number of peaks: the square of the difference of an individual logarithm (base 10) of peak flow less the mean of all individual logarithms of peak flow. Contouring of station skews was the best method for determining generalized skew for West Virginia, with a MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
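
    The MSE criterion, exactly as defined in this abstract, can be sketched as follows; the peak-flow values are hypothetical, not station records.

```python
import math

def mse_log_peaks(peaks):
    """MSE per the abstract: the squared difference of each individual
    log10 peak flow from the mean of all log10 peak flows, summed and
    divided by the number of peaks."""
    logs = [math.log10(q) for q in peaks]
    mean = sum(logs) / len(logs)
    return sum((x - mean) ** 2 for x in logs) / len(logs)

# Hypothetical annual peak flows (cfs) at one streamflow-gaging station.
peaks = [1200, 950, 3100, 780, 2200, 1500, 640, 4100]
print(round(mse_log_peaks(peaks), 4))
```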

  7. Modeling absolute differences in life expectancy with a censored skew-normal regression approach

    PubMed Central

    Clough-Gorr, Kerri; Zwahlen, Marcel

    2015-01-01

    Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544

  8. Halo Pressure Profile through the Skew Cross-power Spectrum of the Sunyaev-Zel’dovich Effect and CMB Lensing in Planck

    NASA Astrophysics Data System (ADS)

    Timmons, Nicholas; Cooray, Asantha; Feng, Chang; Keating, Brian

    2017-11-01

    We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev-Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing-SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature including a direct estimate based on the stacking of SZ clusters in Planck.

  9. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
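
    A minimal sketch of discrete-convolution routing under an assumed unit-response kernel (a normalized stand-in, not the authors' discretized Hayami kernel): the outflow is the inflow convolved with the response, and a kernel summing to one conserves volume.

```python
def discrete_convolution(inflow, kernel):
    """Route an inflow hydrograph through a unit-response kernel by
    discrete convolution: out[n] = sum_k inflow[k] * kernel[n - k]."""
    out = [0.0] * (len(inflow) + len(kernel) - 1)
    for k, x in enumerate(inflow):
        for j, h in enumerate(kernel):
            out[k + j] += x * h
    return out

inflow = [0, 10, 30, 20, 5, 0]            # hypothetical inflow pulse
kernel = [0.1, 0.4, 0.3, 0.15, 0.05]      # sums to 1, so volume is conserved
outflow = discrete_convolution(inflow, kernel)
print(round(sum(outflow), 6) == round(sum(inflow), 6))  # → True
```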

  10. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.

  11. A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2010-09-01

    In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  12. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    PubMed Central

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). 
Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175

  13. The work of Glenn F. Webb.

    PubMed

    Fitzgibbon, William E

    2015-08-01

    It is my distinct pleasure to introduce this volume honoring the 70th birthday of Professor Glenn F. Webb. The existence of this compiled volume is in itself a testimony of Glenn's dedication to, his pursuit of, and his achievement of scientific excellence. As we honor Glenn, we honor what is excellent in our profession. Aristotle clearly articulated his concept of excellence. ``We are what we repeatedly do. Excellence, then, is not an act, but a habit." As we look over the course of his career we observe ample evidence of Glenn Webb's habitual practice of excellence. Beginning with Glenn's first paper [1], one observes a constant stream of productivity and high impact work. Glenn has authored or co-authored over 160 papers, written one research monograph, and co-edited six volumes. He has delivered plenary lectures, colloquia, and seminars across the globe, and he serves on the editorial boards of 11 archival journals. He is a Fellow of the American Mathematical Society. Glenn's scientific career chronicles an evolution of scientific work that began with his interest in nonlinear semigroup theory and leads up to his current activity in biomedical mathematics. At each stage we see seminal contributions in the areas of nonlinear semigroups, functional differential equations, infinite dimensional dynamical systems, mathematical population dynamics, mathematical biology and biomedical mathematics. Glenn's work is distinguished by a clarity and accessibility of exposition, a precise identification and description of the problem or model under consideration, and thorough referencing. He uses elementary methods whenever possible but couples this with an ability to employ powerful abstract methods when necessitated by the problem.

  14. Devaney chaos, Li-Yorke chaos, and multi-dimensional Li-Yorke chaos for topological dynamics

    NASA Astrophysics Data System (ADS)

    Dai, Xiongping; Tang, Xinjia

    2017-11-01

    Let π : T × X → X, written T↷π X, be a topological semiflow/flow on a uniform space X with T a multiplicative topological semigroup/group not necessarily discrete. We then prove: If T↷π X is non-minimal topologically transitive with dense almost periodic points, then it is sensitive to initial conditions. As a result of this, Devaney chaos ⇒ Sensitivity to initial conditions, for this very general setting. Let R+↷π X be a C0-semiflow on a Polish space; then we show: If R+↷π X is topologically transitive with at least one periodic point p and there is a dense orbit with no nonempty interior, then it is multi-dimensional Li-Yorke chaotic; that is, there is an uncountable set Θ ⊆ X such that for any k ≥ 2 and any distinct points x1, …, xk ∈ Θ, one can find two time sequences sn → ∞, tn → ∞ with Moreover, let X be a non-singleton Polish space; then we prove: Any weakly-mixing C0-semiflow R+↷π X is densely multi-dimensional Li-Yorke chaotic. Any minimal weakly-mixing topological flow T↷π X with T abelian is densely multi-dimensional Li-Yorke chaotic. Any weakly-mixing topological flow T↷π X is densely Li-Yorke chaotic. We in addition construct a completely Li-Yorke chaotic minimal SL (2, R)-acting flow on the compact metric space R ∪ { ∞ }. Our various chaotic dynamics are sensitive to the choices of the topology of the phase semigroup/group T.

  15. Opposite GC skews at the 5' and 3' ends of genes in unicellular fungi

    PubMed Central

    2011-01-01

    Background GC-skews have previously been linked to transcription in some eukaryotes. They have been associated with transcription start sites, with the coding strand G-biased in mammals and C-biased in fungi and invertebrates. Results We show a consistent and highly significant pattern of GC-skew within genes of almost all unicellular fungi. The pattern of GC-skew is asymmetrical: the coding strand of genes is typically C-biased at the 5' ends but G-biased at the 3' ends, with intermediate skews at the middle of genes. Thus, the initiation, elongation, and termination phases of transcription are associated with different skews. This pattern influences the encoded proteins by generating differential usage of amino acids at the 5' and 3' ends of genes. These biases also affect fourfold-degenerate positions and extend into promoters and 3' UTRs, indicating that skews cannot be accounted for by selection for protein function or translation. Conclusions We propose two explanations, the mutational pressure hypothesis, and the adaptive hypothesis. The mutational pressure hypothesis is that different co-factors bind to RNA pol II at different phases of transcription, producing different mutational regimes. The adaptive hypothesis is that cytidine triphosphate deficiency may lead to C-avoidance at the 3' ends of transcripts to control the flow of RNA pol II molecules and reduce their frequency of collisions. PMID:22208287
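
    The GC-skew statistic underlying these observations can be sketched as follows; the sequence is synthetic, constructed only to mimic the reported 5'-C-biased / 3'-G-biased pattern.

```python
def gc_skew(seq):
    """GC skew of a sequence window: (G - C) / (G + C); positive means
    G-biased, negative means C-biased."""
    g, c = seq.upper().count("G"), seq.upper().count("C")
    return (g - c) / (g + c) if g + c else 0.0

def windowed_skew(seq, window):
    """Skew profile along a gene, e.g. to compare the 5' and 3' ends."""
    return [gc_skew(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]

# Synthetic coding strand: C-rich near the 5' end, G-rich near the 3' end.
gene = "CCACTCCATC" * 3 + "ATGTAGTCAG" * 3 + "GGAGTGGTAG" * 3
profile = windowed_skew(gene, 30)
print(profile[0] < 0 < profile[-1])  # → True
```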

  16. Effect of skew angle on second harmonic guided wave measurement in composite plates

    NASA Astrophysics Data System (ADS)

    Cho, Hwanjeong; Choi, Sungho; Lissenden, Cliff J.

    2017-02-01

    Waves propagating in anisotropic media are subject to skewing effects due to the media having directional wave speed dependence, which is characterized by slowness curves. Likewise, the generation of second harmonics is sensitive to micro-scale damage that is generally not detectable from linear features of ultrasonic waves. Here, the effect of skew angle on second harmonic guided wave measurement in a transversely isotropic lamina and a quasi-isotropic laminate are numerically studied. The strain energy density function for a nonlinear transversely isotropic material is formulated in terms of the Green-Lagrange strain invariants. The guided wave mode pairs for cumulative second harmonic generation in the plate are selected in accordance with the internal resonance criteria - i.e., phase matching and non-zero power flux. Moreover, the skew angle dispersion curves for the mode pairs are obtained from the semi-analytical finite element method using the derivative of the slowness curve. The skew angles of the primary and secondary wave modes are calculated and wave propagation simulations are carried out using COMSOL. Numerical simulations revealed that the effect of skew angle mismatch can be significant for second harmonic generation in anisotropic media. The importance of skew angle matching on cumulative second harmonic generation is emphasized and the accompanying issue of the selection of internally resonant mode pairs for both a unidirectional transversely isotropic lamina and a quasi-isotropic laminate is demonstrated.

  17. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.

  18. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  19. X chromosome inactivation in women with alcoholism.

    PubMed

    Manzardo, Ann M; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C; Poje, Albert B; Butler, Merlin G

    2012-08-01

    All female mammals with 2 X chromosomes balance gene expression with males having only 1 X by inactivating one of their X chromosomes (X chromosome inactivation [XCI]). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. Increases in the prevalence of skewing of XCI in women with alcoholism could implicate biological risk factors. The pattern of XCI was examined in DNA isolated in blood from 44 adult women meeting DSM-IV criteria for an alcohol use disorder and 45 control women with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50 to 64:36%), moderately skewed (65:35 to 80:20%), and highly skewed (>80:20%). XCI status from informative women with alcoholism was found to be random in 59% (n = 26), moderately skewed in 27% (n = 12), or highly skewed in 14% (n = 6). Control subjects showed 60, 29, and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ(2) test = 0.14, 2 df, p = 0.93). Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss or unequal (nonrandom) expression of X-linked gene(s) or defects in alcoholism among women. Copyright © 2012 by the Research Society on Alcoholism.
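The three-way categorization of XCI ratios used in the study is a simple threshold rule on the percentage of cells inactivating the same X. A sketch with hypothetical AR-assay percentages (the thresholds are the ones quoted in the record; the data values are invented):

```python
def xci_category(pct_major):
    """Classify X-inactivation skewness by the percentage of cells
    inactivating the same X chromosome (categories from the record:
    random 50-64%, moderately skewed 65-80%, highly skewed > 80%)."""
    if pct_major <= 64:
        return "random"
    elif pct_major <= 80:
        return "moderately skewed"
    return "highly skewed"

counts = {"random": 0, "moderately skewed": 0, "highly skewed": 0}
for pct in [52, 63, 70, 81, 95, 50, 77]:   # hypothetical assay ratios
    counts[xci_category(pct)] += 1
print(counts)
```

Tabulating such counts for cases and controls is what feeds the chi-square comparison reported in the abstract.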

  20. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit ... likely to be isolated and be correctable by the convolutional decoder. ... Data rate (Mbps), Modulation, Coding Rate, Coded bits per subcarrier ... binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data

  1. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

Ormesher, Richard C [Albuquerque, NM]; Mason, John J [Albuquerque, NM]

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  2. Quantifying the cross-sectional relationship between online sentiment and the skewness of stock returns

    NASA Astrophysics Data System (ADS)

    Shen, Dehua; Liu, Lanbiao; Zhang, Yongjie

    2018-01-01

The constantly increasing use of social media, e.g., Twitter, as an alternative information channel provides a unique opportunity to investigate the dynamics of the financial market. In this paper, we employ the daily happiness sentiment extracted from Twitter as a proxy for online sentiment dynamics and investigate its association with the skewness of the returns of 26 international stock market indices. The empirical results show that: (1) dividing the days by daily happiness sentiment into quintiles, from the least to the most happy, the skewness of the Most-happiness subgroup is significantly larger than that of the Least-happiness subgroup, and there are significant differences between every pair of subgroups; (2) using an event study methodology, we further show that the skewness around the highest happiness days is significantly larger than the skewness around the lowest happiness days.
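The quintile comparison in the record can be sketched end to end: group days by a sentiment proxy, then compare the sample skewness of returns across subgroups. The data below are synthetic stand-ins (the toy positive tail added on the happiest days is an assumption made purely so the effect is visible, not a claim about the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def skewness(x):
    """Sample skewness: third central moment over variance^(3/2)."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

# Synthetic stand-ins for daily happiness sentiment and daily index returns
sentiment = rng.uniform(5.9, 6.3, size=1000)
returns = rng.normal(0.0, 0.01, size=1000)
happiest = sentiment > np.quantile(sentiment, 0.8)
returns[happiest] += rng.exponential(0.02, size=happiest.sum())  # toy right tail

# Quintile split from least- to most-happy days; skewness per subgroup
edges = np.quantile(sentiment, [0.2, 0.4, 0.6, 0.8])
q = np.digitize(sentiment, edges)               # 0 = least happy, 4 = most happy
skew_by_quintile = [skewness(returns[q == k]) for k in range(5)]
print(skew_by_quintile)
```

With the construction above, the most-happy quintile shows markedly larger return skewness than the least-happy one, mirroring the direction of the paper's finding.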

  3. Halo Pressure Profile through the Skew Cross-power Spectrum of the Sunyaev–Zel’dovich Effect and CMB Lensing in Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timmons, Nicholas; Cooray, Asantha; Feng, Chang

    2017-11-01

We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev–Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model-fit the SZ power spectrum and the CMB lensing–SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature, including a direct estimate based on the stacking of SZ clusters in Planck.

  4. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

In this paper, a new fast frequency domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method is evaluated on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method is found to be more efficient than the existing methods. It also works with typed and picture documents of different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
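The FFT step of such methods can be sketched on synthetic data: periodic "text line" stripes tilted by a known angle produce a dominant spectral peak whose orientation reveals the skew. This is a generic illustration of the principle, with the record's DWT size-reduction stage omitted and a pure sinusoidal stripe pattern standing in for a scanned page:

```python
import numpy as np

def detect_skew(img):
    """Estimate skew from the orientation of the dominant FFT peak
    (periodic text lines give a strong spectral component)."""
    F = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    F[cy, cx] = 0.0                            # suppress the DC term
    i, j = np.unravel_index(np.argmax(F), F.shape)
    dy, dx = i - cy, j - cx                    # peak offset from centre
    if dy < 0:                                 # fold the conjugate peak over
        dy, dx = -dy, -dx
    return np.degrees(np.arctan2(dx, dy))      # orientation of the line normal

# Synthetic "text lines": a stripe pattern tilted by a known skew angle
n, angle = 256, 7.0
y, x = np.mgrid[0:n, 0:n]
a = np.radians(angle)
img = np.cos(2 * np.pi * 40 / n * (y * np.cos(a) + x * np.sin(a)))

est = detect_skew(img)
print(est)   # close to 7 degrees
```

The angular resolution of the single-peak estimate is limited by the frequency-bin spacing; real implementations refine it by interpolating around the peak or accumulating energy along radial lines.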

  5. Impact of radius and skew angle on areal density in heat assisted magnetic recording hard disk drives

    NASA Astrophysics Data System (ADS)

    Cordle, Michael; Rea, Chris; Jury, Jason; Rausch, Tim; Hardie, Cal; Gage, Edward; Victora, R. H.

    2018-05-01

    This study aims to investigate the impact that factors such as skew, radius, and transition curvature have on areal density capability in heat-assisted magnetic recording hard disk drives. We explore a "ballistic seek" approach for capturing in-situ scan line images of the magnetization footprint on the recording media, and extract parametric results of recording characteristics such as transition curvature. We take full advantage of the significantly improved cycle time to apply a statistical treatment to relatively large samples of experimental curvature data to evaluate measurement capability. Quantitative analysis of factors that impact transition curvature reveals an asymmetry in the curvature profile that is strongly correlated to skew angle. Another less obvious skew-related effect is an overall decrease in curvature as skew angle increases. Using conventional perpendicular magnetic recording as the reference case, we characterize areal density capability as a function of recording position.

  6. Experimental investigation of the noise emission of axial fans under distorted inflow conditions

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Renz, Andreas; Becher, Marcus; Becker, Stefan

    2016-11-01

An experimental investigation on the noise emission of axial fans under distorted inflow conditions was conducted. Three fans with forward-skewed fan blades and three fans with backward-skewed fan blades and a common operating point were designed with a 2D blade element method. Two approaches were adopted to modify the inflow conditions: first, the inflow turbulence intensity was increased by two different rectangular grids and second, the inflow velocity profile was changed to an asymmetric characteristic by two grids with a distinct bar stacking. An increase in the inflow turbulence intensity affects both tonal and broadband noise, whereas a non-uniform velocity profile at the inlet influences mainly tonal components. The magnitude of this effect is not the same for all fans but is dependent on the blade skew. The impact is greater for the forward-skewed fans than for the backward-skewed and thus directly linked to the fan blade geometry.

  7. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers with kernel sizes of 5×5, 3×3, and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in both reconstruction quality metrics and human visual assessment on benchmark images.

  8. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
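For readers unfamiliar with trellis decoding, the baseline that syndrome decoding improves on can be sketched with a standard Viterbi decoder for a small rate-1/2, 4-state code. This is a generic textbook code chosen so the whole trellis fits in a few lines, not the rate-3/4 Wyner-Ash code or the syndrome decoder of the record:

```python
# Standard Viterbi decoding of a rate-1/2, constraint-length-3 convolutional
# code (generators 7,5 octal; 4 trellis states).

def encode(bits, state=0):
    out = []
    for b in bits:
        s1, s2 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s2, b ^ s2]           # generator polynomials 111, 101
        state = ((state >> 1) | (b << 1)) & 3  # shift the new bit in
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0, INF, INF, INF]                # start in the all-zero state
    paths = [[] for _ in range(4)]
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                s1, s2 = (s >> 1) & 1, s & 1
                o0, o1 = b ^ s1 ^ s2, b ^ s2
                ns = ((s >> 1) | (b << 1)) & 3
                m = metric[s] + (o0 != r0) + (o1 != r1)
                if m < new_metric[ns]:         # keep the survivor per state
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = encode(msg + [0, 0])                      # two tail bits terminate the trellis
rx = list(tx)
rx[5] ^= 1                                     # inject one channel bit error
print(viterbi(rx)[:len(msg)] == msg)           # True: the error is corrected
```

The decoder's work grows with the number of trellis states, which is the quantity the record's 64-state versus 7-state comparison is about: syndrome decoding shrinks the searched trellis rather than changing the code.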

  9. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  10. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  11. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  12. Mean, Median, and Skew: Correcting a Textbook Rule

    ERIC Educational Resources Information Center

    von Hippel, Paul T.

    2005-01-01

    Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete…
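The failure of the textbook rule is easy to verify on a discrete example. The frequency table below (a constructed example in the spirit of the article, approximating a Binomial(10, 0.1) distribution, not taken from it) is clearly right-skewed, yet its mean does not lie to the right of its median:

```python
from statistics import mean, median

# Right-skewed discrete data where the mean is NOT right of the median
data = [0]*349 + [1]*387 + [2]*194 + [3]*57 + [4]*11 + [5]*2

m, med = mean(data), median(data)
d = [x - m for x in data]
skew = (sum(v**3 for v in d) / len(d)) / (sum(v**2 for v in d) / len(d)) ** 1.5

print(m, med, skew)   # mean == median == 1, yet skewness > 0
```

The rule predicts mean > median under the positive skew, but here the long right tail is exactly balanced by the heavy spike at zero, so the mean sits on the median.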

  13. Generation of net sediment transport by velocity skewness in oscillatory sheet flow

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Li, Yong; Chen, Genfa; Wang, Fujun; Tang, Xuelin

    2018-01-01

This study uses a qualitative approach and a two-phase numerical model to investigate net sediment transport caused by velocity skewness beneath oscillatory sheet flow and current. The qualitative approach is derived from the pseudo-laminar approximation of the boundary layer velocity and an exponential approximation of the concentration. The two-phase model reproduces well the instantaneous erosion depth, sediment flux, boundary layer thickness, and sediment transport rate. In particular, it illustrates the difference between positive and negative flow stages caused by velocity skewness, which is important in determining the net boundary layer flow and the direction of sediment transport. The two-phase model also explains the effects of sediment diameter and phase-lag on sediment transport through comparison with instantaneous-type formulas, further illustrating the velocity skewness effect. Previous studies of sheet flow transport in pure velocity-skewed flows attributed net sediment transport only to the phase-lag effect. In the present study, the qualitative approach and the two-phase model show that the phase-lag effect is important but not sufficient for net sediment transport beneath pure velocity-skewed flow and current; the asymmetric development of the wave boundary layer between positive and negative flow stages also contributes to the sediment transport.

  14. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with an alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.

  15. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture

    PubMed Central

    Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883

  16. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  17. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    PubMed

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.

  18. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies used in the calculation, which improves the conditioning of the system matrix.
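As a baseline for the "simple convolution integrals" part of such a study, a convolution y(t) = ∫₀ᵗ f(t−s) g(s) ds can be evaluated by direct trapezoidal quadrature and checked against a closed form. This is a generic time-domain sketch against which transform-based inversions are typically verified, not the convolution quadrature method itself:

```python
import numpy as np

def convolve_direct(f, g, t):
    """Trapezoidal quadrature of y(t) = int_0^t f(t - s) g(s) ds
    on a uniform grid t (direct time-domain reference)."""
    h = t[1] - t[0]
    y = np.zeros_like(t)
    for k in range(1, len(t)):
        s = t[: k + 1]
        vals = f(t[k] - s) * g(s)
        y[k] = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return y

# Check against the closed form: (exp(-t) * sin(t)) has the exact
# convolution (sin t - cos t + exp(-t)) / 2.
f = lambda u: np.exp(-u)
g = lambda u: np.sin(u)
t = np.linspace(0.0, 5.0, 501)
y = convolve_direct(f, g, t)
exact = (np.sin(t) - np.cos(t) + np.exp(-t)) / 2
err = np.max(np.abs(y - exact))
print(err)   # small O(h^2) discretization error
```

Transform methods reproduce the same integral from samples of the Laplace-domain product F(s)G(s); the direct quadrature above is the slow but transparent reference they are measured against.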

  19. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. Aiming to improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-cache reuse scheme is proposed. Multi-block SPRAM caches image blocks, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation; a new ASIC data scheduling scheme and overall architecture are designed around it. Experimental results show that the structure can perform real-time convolution with templates up to 40×32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.
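The operation being accelerated is plain 2-D convolution; a direct software reference (a sketch of the mathematical definition, not the cached ASIC architecture) makes the per-pixel cost of large kernels explicit:

```python
import numpy as np

def conv2d_valid(img, ker):
    """Direct 2-D convolution over the "valid" region: every output
    pixel costs kh*kw multiply-accumulates, which is why kernel size
    dominates the hardware budget."""
    kh, kw = ker.shape
    H, W = img.shape
    kf = ker[::-1, ::-1]                  # flip the kernel for true convolution
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kf)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
ker = np.array([[0.0, 1.0], [1.0, 0.0]])
print(conv2d_valid(img, ker))
```

Each input pixel is reused by up to kh×kw neighbouring outputs, which is exactly the reuse the paper's on-chip cache and ping-pong buffering exploit to avoid refetching data from off-chip memory.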

  20. Performance Analysis of IEEE 802.11g TCM Waveforms Transmitted over a Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2007-06-01

Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code ... Hamming distance between all pairs of non-zero paths. Table 2 lists the best rate r=2/3 punctured convolutional code information weight structure ... Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure. (From: [12]). K, d_free, B_free

  1. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
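The quantity the program computes, a 2-D cyclic convolution, can be cross-checked against ordinary FFTs via the convolution theorem. This is a sketch of the verification baseline only; the program itself uses polynomial transforms and the Chinese remainder theorem rather than 2-D FFTs:

```python
import numpy as np

def cyclic_conv2d_fft(a, b):
    """2-D cyclic convolution via the convolution theorem (FFT cross-check)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def cyclic_conv2d_direct(a, b):
    """2-D cyclic convolution straight from the definition, with
    wrap-around indices (slow, for verification only)."""
    n, m = a.shape
    out = np.zeros_like(a, dtype=float)
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    out[i, j] += a[k, l] * b[(i - k) % n, (j - l) % m]
    return out

rng = np.random.default_rng(2)
a, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
diff = np.max(np.abs(cyclic_conv2d_fft(a, b) - cyclic_conv2d_direct(a, b)))
print(diff)   # agreement to floating-point precision
```

Polynomial-transform methods reach the same result with integer-friendly arithmetic in a polynomial ring, which is what made them attractive on the hardware of the program's era.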

  2. Effects of Convoluted Divergent Flap Contouring on the Performance of a Fixed-Geometry Nonaxisymmetric Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Hunter, Craig A.

    1999-01-01

    An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.

  3. Considerations on the mechanisms of alternating skew deviation in patients with cerebellar lesions.

    PubMed

    Zee, D S

    1996-01-01

Alternating skew deviation, in which the side of the higher eye changes depending upon whether gaze is directed to the left or the right, is a frequent sign in patients with posterior fossa lesions, including those restricted to the cerebellum. Here we propose a mechanism for alternating skews related to the otolith-ocular responses to fore and aft pitch of the head in lateral-eyed animals. In lateral-eyed animals the expected response to a static head pitch is cyclorotation of the eyes. But if the eyes are rotated horizontally in the orbit, away from the primary position, a compensatory skew deviation should also appear. The direction of the skew would depend upon whether the eyes were directed to the right (left eye forward, right eye backward) or to the left (left eye backward, right eye forward). In contrast, for frontal-eyed animals, skew deviations are counterproductive because they create diplopia and interfere with binocular vision. We attribute the emergence of skew deviations in frontal-eyed animals in pathological conditions to 1) an imbalance in otolith-ocular pathways and 2) a loss of the component of ocular motor innervation that normally corrects for the differences in pulling directions and strengths of the various ocular muscles as the eyes change position in the orbit. Such a compensatory mechanism is necessary to ensure optimal binocular visual function during and after head motion. This compensatory mechanism may depend upon the cerebellum.

  4. LQR Control of Shell Vibrations Via Piezoceramic Actuators

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1997-01-01

    A model-based Linear Quadratic Regulator (LQR) method for controlling vibrations in cylindrical shells is presented. Surface-mounted piezo-ceramic patches are employed as actuators which leads to unbounded control input operators. Modified Donnell-Mushtari shell equations incorporating strong or Kelvin-Voigt damping are used to model the system. The model is then abstractly formulated in terms of sesquilinear forms. This provides a framework amenable for proving model well-posedness and convergence of LQR gains using analytic semigroup results combined with LQR theory for unbounded input operators. Finally, numerical examples demonstrating the effectiveness of the method are presented.

  5. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
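
    The backward Riccati recursion at the heart of finite-dimensional discrete-time LQR approximations can be sketched in a few lines. This is an illustrative finite-dimensional example, not the paper's infinite-dimensional scheme; the function name `dlqr_gains` is ours.

```python
import numpy as np

def dlqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.

    Returns the gain sequence K_t (u_t = -K_t x_t) and the final Riccati matrix.
    """
    P = Q.copy()                      # terminal cost P_N = Q
    gains = []
    for _ in range(horizon):
        # K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P = Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P             # gains in forward-time order

# Scalar example A = B = Q = R = 1: the stationary Riccati
# solution is the golden ratio (1 + sqrt(5))/2.
gains, P = dlqr_gains(np.eye(1), np.eye(1), np.eye(1), np.eye(1), horizon=100)
```

    For a long horizon the time-0 gain approaches the stationary infinite-horizon gain, which is the convergence behavior the abstract framework establishes for the approximating finite-dimensional problems.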

  6. On the extensible viscoelastic beam

    NASA Astrophysics Data System (ADS)

    Giorgi, Claudio; Pata, Vittorino; Vuk, Elena

    2008-04-01

    This work is focused on the equation
\[ \partial_{tt} u + \partial_{xxxx} u + \int_0^\infty \mu(s)\, \partial_{xxxx} [u(t) - u(t-s)]\, \mathrm{d}s - \big(\beta + \|\partial_x u\|_{L^2(0,1)}^2\big) \partial_{xx} u = f \]
describing the motion of an extensible viscoelastic beam. Under suitable boundary conditions, the related dynamical system in the history space framework is shown to possess a global attractor of optimal regularity. The result is obtained by exploiting an appropriate decomposition of the solution semigroup, together with the existence of a Lyapunov functional.

  7. Distributed System Optimal Control and Parameter Estimation: Computational Techniques Using Spline Approximations.

    DTIC Science & Technology

    1982-04-01

    (Fragmentary indexing excerpt.) The report concerns optimal control and parameter estimation for partial differential equations (PDEs) of hyperbolic or parabolic type. The state space Z is orthogonally projected onto finite-dimensional subspaces Z^N; the estimation problem is to choose a parameter q from an admissible set Q so as to yield a best fit, under a dissipative inequality for z in Dom(A(q)); the control input term is F(t) = Bu(t), and approximating operators are defined from the semigroup T(t;q).

  8. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
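
    The idea behind atrous convolution — the same filter weights applied with gaps between taps, enlarging the field of view at no extra parameter cost — can be shown in a minimal 1-D numpy sketch (the paper works with 2-D filters inside DCNNs; the function name is ours):

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution with zero padding ('same' output size).

    Equivalent to convolving with w after inserting (rate - 1) zeros
    between its taps, so the field of view grows without extra weights.
    """
    k = len(w)
    span = (k - 1) * rate              # effective filter span
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([sum(w[j] * xp[i + j * rate] for j in range(k))
                     for i in range(len(x))])

x = np.arange(8, dtype=float)
w = np.array([1.0, 1.0, 1.0])
y1 = atrous_conv1d(x, w, rate=1)       # ordinary 3-tap sum
y2 = atrous_conv1d(x, w, rate=2)       # same 3 weights, field of view 5
```

    With `rate=2` the output at position 4 sums x[2] + x[4] + x[6]: three weights, but a receptive field of five samples.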

  9. A separable two-dimensional discrete Hartley transform

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Poirson, A.

    1985-01-01

    Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. It is also argued that the DHT is unlikely to provide faster convolution than the DFT.
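
    The one-dimensional DHT underlying this discussion uses the cas kernel, cas(t) = cos(t) + sin(t), and is simply related to the DFT; a short numpy sketch (function names are ours):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via its cas kernel, cas(t) = cos(t) + sin(t)."""
    n = len(x)
    k = np.arange(n)
    t = 2 * np.pi * np.outer(k, k) / n
    return (np.cos(t) + np.sin(t)) @ x

def dht_via_fft(x):
    """Same transform obtained from the FFT: H = Re(F) - Im(F)."""
    F = np.fft.fft(x)
    return F.real - F.imag

x = np.array([1.0, 2.0, 0.0, -1.0])
```

    The DHT is real-valued and, up to a factor 1/N, its own inverse — two of the properties that motivated Bracewell's proposal.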

  10. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including the complex shapes and detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance on various medical images. Its effectiveness is demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  11. Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog

    NASA Astrophysics Data System (ADS)

    Rosshidi, H. T.; Hadi, A. R.

    2009-06-01

    This paper presents an implementation of a Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance the fingerprint image. The incoming signal, in the form of image pixels, is convolved with the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristics of the proposed approach are the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
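
    The Gabor kernel that such a convolver applies is an oriented Gaussian-windowed cosine grating. A minimal numpy sketch of the kernel alone (the paper's contribution is the FPGA implementation, not this formula; parameter names are ours):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=1.0):
    """Real Gabor kernel: Gaussian envelope times a cosine grating
    oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam + psi)

k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0)
```

    Convolving a fingerprint image with a bank of such kernels at ridge-aligned orientations enhances ridge/valley contrast, which is what the FPGA convolver computes in hardware.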

  12. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, the deep convolutional neural network VGG19 was used for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes and their extraction effects, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. The paper also gives advice on the choice of input block size and output block size.

  13. Symmetric convolution of asymmetric multidimensional sequences using discrete trigonometric transforms.

    PubMed

    Foltz, T M; Welsh, B M

    1999-01-01

    This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
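
    The fact the derivation rests on — the DFT diagonalizes a circulant matrix, so multiplying by a circulant equals pointwise multiplication in the Fourier domain — can be checked numerically in a few lines (a 1-D sketch; the paper extends this to block circulant matrices):

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([2.0, -1.0, 0.0, -1.0])
x = np.array([1.0, 3.0, 5.0, 7.0])

# Multiplying by a circulant is circular convolution, so the DFT
# diagonalizes it: C x = ifft( fft(c) * fft(x) ).
direct   = circulant(c) @ x
spectral = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
```

    The symmetric convolution-multiplication property for discrete trigonometric transforms is the analogous statement for symmetrically extended sequences.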

  14. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503

  15. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
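
    For orientation, here is the inner-code building block in its most familiar form: a textbook bit-oriented rate-1/2 convolutional encoder with constraint length 3 (generators 7 and 5 octal). This is an ordinary convolutional code, not the byte-oriented unit-memory code the paper proposes, which shifts one whole byte per step:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators 7,5 octal).

    Each input bit produces two output bits: the parities of the
    generator taps applied to the current bit plus two state bits.
    """
    state = 0                          # two previous input bits
    out = []
    for b in bits:
        reg = (b << 2) | state         # current bit + state, 3 bits wide
        out.append(bin(reg & g1).count("1") % 2)   # parity of tapped bits
        out.append(bin(reg & g2).count("1") % 2)
        state = reg >> 1               # shift register: drop the oldest bit
    return out

encoded = conv_encode([1, 0, 1, 1])
```

    In the concatenated scheme, bytes of such inner-code decisions feed the Reed-Solomon outer decoder, whose feedback the paper exploits in the inner decoding algorithm.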

  16. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  17. The generalised Sylvester matrix equations over the generalised bisymmetric and skew-symmetric matrices

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Hajarian, Masoud

    2012-08-01

    A matrix P is called symmetric orthogonal if P = P^T = P^(-1). A matrix X is said to be generalised bisymmetric with respect to P if X = X^T = PXP. Obviously, any symmetric matrix is a generalised bisymmetric matrix with respect to I (the identity matrix). By extending the idea of the Jacobi and Gauss-Seidel iterations, this article proposes two new iterative methods for computing, respectively, the generalised bisymmetric (containing the symmetric solution as a special case) and skew-symmetric solutions of the generalised Sylvester matrix equation ? (including the Sylvester and Lyapunov matrix equations as special cases), which is encountered in many systems and control applications. When the generalised Sylvester matrix equation has a unique generalised bisymmetric (skew-symmetric) solution, the first (second) iterative method converges to the generalised bisymmetric (skew-symmetric) solution of this matrix equation for any initial generalised bisymmetric (skew-symmetric) matrix. Finally, some numerical results are given to illustrate the theoretical results.
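
    The definitions can be verified numerically: with the exchange matrix as a symmetric orthogonal P, averaging a symmetrized matrix with its P-reflection produces a generalised bisymmetric matrix. A small numpy sketch (the construction is ours, chosen only to exhibit the property):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
P = np.fliplr(np.eye(n))          # exchange matrix: P = P^T = P^(-1)
Y = rng.standard_normal((n, n))

# Symmetrize, then average with the P-reflection: the result satisfies
# X = X^T = P X P, i.e. it is generalised bisymmetric with respect to P.
S = (Y + Y.T) / 2
X = (S + P @ S @ P) / 2
```

    Such matrices form a linear subspace, which is why an iteration started from a generalised bisymmetric matrix can stay inside the class, as the convergence statement in the abstract requires.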

  18. Skewness in large-scale structure and non-Gaussian initial conditions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Scherrer, Robert J.

    1994-01-01

    We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.

  19. Arnold-Chiari malformation and nystagmus of skew

    PubMed Central

    Pieh, C.; Gottlob, I.

    2000-01-01

    The Arnold-Chiari malformation is typically associated with downbeat nystagmus. Eye movement recordings in two patients with Arnold-Chiari malformation type 1 showed, in addition to downbeat and gaze-evoked nystagmus, intermittent nystagmus of skew. To date this finding has not been reported in association with Arnold-Chiari malformation. Nystagmus of skew should raise the suspicion of Arnold-Chiari malformation and prompt sagittal head MRI examination. PMID:10864619

  20. An Adaptive Method for Reducing Clock Skew in an Accumulative Z-Axis Interconnect System

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Boyce, Lee

    1997-01-01

    This paper presents several methods for adjusting the clock skew variations that occur in an accumulative z-axis interconnect system. In such a system, the delay between modules is a function of their distance from one another. Clock distribution in a high-speed system, where clock skew must be kept to a minimum, becomes more challenging when the module order is variable before design.

  1. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    PubMed

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R and a newly developed Web application.
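
    The univariate moment-based measures used in such studies are easy to compute directly. A numpy sketch of the standard moment formulas (the paper's tutorial covers SAS, SPSS, R and a web application; this is merely the same arithmetic):

```python
import numpy as np

def skewness(x):
    """Moment-based sample skewness g1 = m3 / m2^(3/2)."""
    z = x - x.mean()
    return (z**3).mean() / (z**2).mean() ** 1.5

def excess_kurtosis(x):
    """Sample excess kurtosis g2 = m4 / m2^2 - 3 (0 for a normal)."""
    z = x - x.mean()
    return (z**4).mean() / (z**2).mean() ** 2 - 3.0

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
```

    A symmetric sample like this one has zero skewness, and its flat shape gives a negative excess kurtosis; reporting both alongside means and variances is exactly the practice the authors advocate.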

  2. Handling Data Skew in MapReduce Cluster by Using Partition Tuning

    PubMed

    Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai

    2017-01-01

    The healthcare industry has generated large amounts of data, and analyzing these data has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al.
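
    The two-stage idea — hash keys into many small virtual partitions, then assign virtual partitions to reducers by load rather than by a fixed hash — can be sketched as follows. This is a hypothetical illustration of the general technique, not the authors' actual PTSH algorithm; all names are ours.

```python
from collections import Counter

def two_stage_partition(keys, n_virtual, n_reducers):
    """Sketch of two-stage partitioning for skewed keys.

    Stage 1: hash keys into many small virtual partitions.
    Stage 2: greedily assign each virtual partition (largest first) to the
    currently least-loaded reducer, so hot partitions are spread out.
    """
    load = Counter(hash(k) % n_virtual for k in keys)   # virtual partition sizes
    reducers = [0] * n_reducers
    assignment = {}
    for vp, size in load.most_common():                 # biggest first
        tgt = reducers.index(min(reducers))
        reducers[tgt] += size
        assignment[vp] = tgt
    return assignment, reducers

keys = ["hot"] * 50 + [f"k{i}" for i in range(50)]
assignment, loads = two_stage_partition(keys, n_virtual=32, n_reducers=4)
```

    Note that a single hot key still cannot be split across reducers by hashing alone; handling that residual skew is what PTSH's recombination step addresses.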

  3. Handling Data Skew in MapReduce Cluster by Using Partition Tuning.

    PubMed

    Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai

    2017-01-01

    The healthcare industry has generated large amounts of data, and analyzing these data has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.

  4. Handling Data Skew in MapReduce Cluster by Using Partition Tuning

    PubMed Central

    Zhou, Yanjie; Zhou, Bing; Shi, Lei

    2017-01-01

    The healthcare industry has generated large amounts of data, and analyzing these data has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. PMID:29065568

  5. A digital pixel cell for address event representation image convolution processing

    NASA Astrophysics Data System (ADS)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate events according to their information levels: neurons with more information (activity, derivative of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, and so on. There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events: once a pixel has accumulated sufficient event contributions to reach a fixed threshold, it fires an event, which is then routed out of the chip for further processing. Such convolution chips have previously been proposed using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events; this provides, for a given technology, a fully digital reference implementation against which to compare mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel, which will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
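
    The accumulate-and-fire behavior of such a convolution pixel can be sketched behaviorally in a few lines. This is an abstract model of the mechanism described in the abstract, not the paper's digital circuit; all names are ours.

```python
def aer_pixel_events(weights, events, threshold):
    """Behavioral sketch of an AER convolution pixel.

    Each incoming address event adds its kernel weight to an accumulator;
    when the accumulator reaches the threshold, the pixel fires an output
    event and the threshold is subtracted from the accumulator.
    """
    acc = 0
    fired = []
    for t, addr in events:
        acc += weights[addr]
        if acc >= threshold:
            fired.append(t)           # output event routed off-chip at time t
            acc -= threshold
    return fired

weights = {0: 3, 1: 1}                # kernel weights per source address
events = [(0, 0), (1, 1), (2, 0), (3, 0)]   # (time, source address)
out = aer_pixel_events(weights, events, threshold=4)
```

    The pixel's output rate is thus proportional to the weighted input event rate, which is how the chip computes a kernel convolution in the event domain.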

  6. New S control chart using skewness correction method for monitoring process dispersion of skewed distributions

    NASA Astrophysics Data System (ADS)

    Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha

    2017-11-01

    The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on a normality assumption, which is not always satisfied by industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S), and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detections is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control probabilities (Type I error) at almost all skewness levels and sample sizes n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion, in terms of both Type I error and probability of detecting a shift.
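
    For context, the baseline that skewness-corrected charts modify is the standard normal-theory S chart, whose three-sigma limits follow from the unbiasing constant c4. A sketch of those standard limits (this is the STD-S baseline, not the paper's SC-S correction; function names are ours):

```python
from math import gamma, sqrt

def c4(n):
    """Unbiasing constant: E[s] = c4 * sigma for normal samples of size n."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

def s_chart_limits(s_bar, n):
    """Three-sigma control limits of the standard S chart (normality assumed).

    Equivalent to the textbook B3/B4 factors applied to the average
    subgroup standard deviation s_bar.
    """
    c = c4(n)
    width = 3.0 * sqrt(1.0 - c * c) / c
    return max(0.0, s_bar * (1.0 - width)), s_bar * (1.0 + width)

lcl, ucl = s_chart_limits(s_bar=1.0, n=5)
```

    Under skewed distributions these symmetric limits misstate the false alarm rate, which is precisely the problem the SC-S chart's skewness correction addresses.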

  7. Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    1994-01-01

    The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. By increasing the expansion ratio, the location of the sonic plane was stabilized. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated and the location of the sonic plane may be further stabilized.

  8. Software Communications Architecture (SCA) Compliant Software Defined Radio Design for IEEE 802.16 Wirelessman-OFDMTM Transceiver

    DTIC Science & Technology

    2006-12-01

    (Fragmentary indexing excerpt.) The thesis describes the channel coding of the IEEE 802.16 WirelessMAN-OFDM transceiver: a convolutional encoder of rate 1/2, puncturing patterns used to derive the different code rates (with X preceding Y in the puncturing order), mandatory channel coding per modulation, and a concatenation of a Reed-Solomon outer code with a rate-adjustable convolutional inner code; at the transmitter, data are first encoded with the outer code.

  9. Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal

    DTIC Science & Technology

    2004-03-01

    (Fragmentary indexing excerpt.) The thesis covers the convolutional encoder and puncturing parameters of the standard IEEE 802.11g OFDM signal: the required code rate r = 3/4 is obtained by puncturing; the higher data rates of 802.11b were achieved using packet binary convolutional coding (PBCC), while the higher rates here are achieved with convolutional coding combined with BPSK or QPSK modulation, the data first being encoded at rate one-half.

  10. Design and System Implications of a Family of Wideband HF Data Waveforms

    DTIC Science & Technology

    2010-09-01

    (Fragmentary indexing excerpt.) The report notes that high code rates (e.g., 8/9, 9/10) will be used to attain the highest data rates for surface wave links, obtained by very heavy puncturing of the constraint-length-7 convolutional code that has been used for over two decades in 110A; repetition coding and puncturing are also used. Cited works include NATO documentation on communication links and Yasuda, Kashiki and Hirata, "High-Rate Punctured Convolutional Codes".

  11. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often at high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named the bidirectional recurrent convolutional network, for efficient multi-frame SR. It differs from vanilla RNNs in two ways: 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns of short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With its powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieves good performance.

  12. Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1995-01-01

    During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.

  13. The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.

    PubMed

    Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio

    2017-01-01

    To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN), light (SL), and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The outcome was the change in color intensity of hematomas, measured by means of polar CIE L*a*b* coordinates using a validated and standardized digital image system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Development of regional skews for selected flood durations for the Central Valley Region, California, based on data through water year 2008

    USGS Publications Warehouse

    Lamontagne, Jonathan R.; Stedinger, Jery R.; Berenbrock, Charles; Veilleux, Andrea G.; Ferris, Justin C.; Knifong, Donna L.

    2012-01-01

    Flood-frequency information is important in the Central Valley region of California because of the high risk of catastrophic flooding. Most traditional flood-frequency studies focus on peak flows, but for assessing the adequacy of reservoirs, levees, and other flood-control structures, sustained flood-flow (flood-duration) frequency data are needed. This study focuses on rainfall or rain-on-snow floods, rather than the annual maximum, because rain events produce the largest floods in the region. A key to estimating flood-duration frequency is determining the regional skew for such data. Of the 50 sites used in this study to determine regional skew, 28 sites were considered to have little to no significant flow regulation, and for the 22 sites considered significantly regulated, unregulated daily flow data were synthesized by using reservoir storage changes and diversion records. The unregulated annual maximum rainfall flood flows for selected durations (1-day, 3-day, 7-day, 15-day, and 30-day) for all 50 sites were furnished by the U.S. Army Corps of Engineers. Station skew was determined by using the expected moments algorithm program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual flood-duration data (the log-Pearson Type 3, or LP3, distribution). Bayesian generalized least squares regression procedures used in earlier studies were modified to address problems caused by large cross correlations among concurrent rainfall floods in California and to address the extensive censoring of low outliers at some sites, by using the new expected moments algorithm for fitting the LP3 distribution to rainfall flood-duration data. To properly account for these problems and to develop suitable regional-skew regression models and regression diagnostics, a combination of ordinary least squares, weighted least squares, and Bayesian generalized least squares regressions was adopted.
This new methodology determined that a nonlinear model relating regional skew to mean basin elevation was the best model for each flood duration. The regional-skew values ranged from -0.74 for a flood duration of 1 day and a mean basin elevation less than 2,500 feet to values near 0 for a flood duration of 7 days and a mean basin elevation greater than 4,500 feet. This relation between skew and elevation reflects the interaction of snow and rain, which increases with elevation. The regional skews are more accurate, and the mean squared errors smaller, than those of the national skew map in the Interagency Advisory Committee on Water Data's Bulletin 17B.

  15. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    PubMed

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E-stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology that require prior computation of handcrafted features, such as statistical measures using gray-level co-occurrence matrices, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. By elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout with stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
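
    The test-time scheme can be sketched from the description above: with retain probability q, max-pooling dropout selects the i-th largest activation in a pooling region with probability q(1 - q)^(i-1), and probabilistic weighted pooling outputs the expectation under that multinomial distribution (a sketch following the paper's description, not the authors' code):

```python
# Sketch of probabilistic weighted pooling at test time: during training,
# max-pooling dropout keeps each unit in a pooling region independently
# with retain probability q, so the i-th largest activation becomes the
# pooled output exactly when all larger ones are dropped and it is kept,
# i.e. with probability q * (1 - q)**(i - 1).  At test time we pool with
# the expectation under that distribution instead of sampling.

def prob_weighted_pool(activations, q):
    """Expected max-pooling-dropout output over one pooling region."""
    a_sorted = sorted(activations, reverse=True)
    return sum(q * (1 - q) ** i * a for i, a in enumerate(a_sorted))

region = [0.2, 0.9, 0.5, 0.1]
pooled = prob_weighted_pool(region, q=0.7)
```

    As q approaches 1 the result reduces to plain max-pooling, and smaller q mixes in progressively more of the smaller activations.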

  17. Frame prediction using recurrent convolutional encoder with residual learning

    NASA Astrophysics Data System (ADS)

    Yue, Boxuan; Liang, Jun

    2018-05-01

    Predicting the future frames of a video is difficult but urgently needed for autonomous driving. Conventional methods can only predict some abstract trends of the region of interest. The boom of deep learning makes the prediction of frames possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to solve the gradient issues. Residual learning can transform the gradient back-propagation into an identity mapping, preserving the whole gradient information and overcoming the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning can reduce training time significantly. In the experiments, we use the UCF101 dataset to train our networks, and the predictions are compared with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.

  18. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is addressed here via semantic segmentation: the FCN classifies individual pixels, thereby achieving semantic segmentation of the image. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4 to 1.6 m, with a distance error of less than 10 mm.

  19. Resonances in a Chaotic Attractor Crisis of the Lorenz Flow

    NASA Astrophysics Data System (ADS)

    Tantet, Alexis; Lucarini, Valerio; Dijkstra, Henk A.

    2018-02-01

    Local bifurcations of stationary points and limit cycles have successfully been characterized in terms of the critical exponents of these solutions. Lyapunov exponents and their associated covariant Lyapunov vectors have been proposed as tools for supporting the understanding of critical transitions in chaotic dynamical systems. However, it is in general not clear how the statistical properties of dynamical systems change across a boundary crisis during which a chaotic attractor collides with a saddle. This behavior is investigated here for a boundary crisis in the Lorenz flow, for which neither the Lyapunov exponents nor the covariant Lyapunov vectors provide a criterion for the crisis. Instead, the convergence of the time evolution of probability densities to the invariant measure, governed by the semigroup of transfer operators, is expected to slow down at the approach of the crisis. Such convergence is described by the eigenvalues of the generator of this semigroup, which can be divided into two families, referred to as the stable and unstable Ruelle-Pollicott resonances, respectively. The former describes the convergence of densities to the attractor (or escape from a repeller) and is estimated from many short time series sampling the state space. The latter is responsible for the decay of correlations, or mixing, and can be estimated from a long time series, invoking ergodicity. It is found numerically for the Lorenz flow that the stable resonances do approach the imaginary axis during the crisis, as is indicative of the loss of global stability of the attractor. On the other hand, the unstable resonances, and a fortiori the decay of correlations, do not flag the proximity of the crisis, thus questioning the usual design of early warning indicators of boundary crises of chaotic attractors and the applicability of response theory close to such crises.

  20. Resonances in a Chaotic Attractor Crisis of the Lorenz Flow

    NASA Astrophysics Data System (ADS)

    Tantet, Alexis; Lucarini, Valerio; Dijkstra, Henk A.

    2017-12-01

    Local bifurcations of stationary points and limit cycles have successfully been characterized in terms of the critical exponents of these solutions. Lyapunov exponents and their associated covariant Lyapunov vectors have been proposed as tools for supporting the understanding of critical transitions in chaotic dynamical systems. However, it is in general not clear how the statistical properties of dynamical systems change across a boundary crisis during which a chaotic attractor collides with a saddle. This behavior is investigated here for a boundary crisis in the Lorenz flow, for which neither the Lyapunov exponents nor the covariant Lyapunov vectors provide a criterion for the crisis. Instead, the convergence of the time evolution of probability densities to the invariant measure, governed by the semigroup of transfer operators, is expected to slow down at the approach of the crisis. Such convergence is described by the eigenvalues of the generator of this semigroup, which can be divided into two families, referred to as the stable and unstable Ruelle-Pollicott resonances, respectively. The former describes the convergence of densities to the attractor (or escape from a repeller) and is estimated from many short time series sampling the state space. The latter is responsible for the decay of correlations, or mixing, and can be estimated from a long time series, invoking ergodicity. It is found numerically for the Lorenz flow that the stable resonances do approach the imaginary axis during the crisis, as is indicative of the loss of global stability of the attractor. On the other hand, the unstable resonances, and a fortiori the decay of correlations, do not flag the proximity of the crisis, thus questioning the usual design of early warning indicators of boundary crises of chaotic attractors and the applicability of response theory close to such crises.

  1. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  2. Efficient airport detection using region-based fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure and use graphics processing units (GPUs) to reduce training and testing time. Because labeled data are scarce, we transfer the convolutional layers of the ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrain the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes almost in real time with high accuracy, which is much better than traditional methods.

  3. A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.

    PubMed

    Yang, Wenlun; Fu, Minyue

    2017-11-01

    Clock synchronization is an issue of vital importance in wireless sensor network (WSN) applications. This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. Because each local clock skew gradually drifts, synchronization accuracy declines over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a truly synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
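
    The proportional-integral idea can be illustrated generically (a sketch with assumed gains, not the paper's EBP protocol): a rate correction is computed from successive offset measurements, with the integral term accumulating any persistent offset caused by clock skew.

```python
# Generic PI sketch (assumed gains kp, ki; not the paper's protocol):
# each measured offset against a reference neighbor updates a clock-rate
# correction.  A constant skew appears as a persistent offset, which the
# integral term accumulates until the correction cancels it.

def pi_sync(offsets, kp=0.5, ki=0.1):
    """Rate corrections applied after each measured offset."""
    integral = 0.0
    corrections = []
    for err in offsets:
        integral += err          # integral of the offset history
        corrections.append(kp * err + ki * integral)
    return corrections

# Offsets shrinking toward zero as the loop compensates the skew:
cs = pi_sync([1.0, 0.8, 0.5, 0.2])
```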

  4. Collaborative identification method for sea battlefield target based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong

    2018-03-01

    Target identification on the sea battlefield is a prerequisite for judging enemy intent in modern naval battle. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical sea-battlefield targets. Different from traditional single-input/single-output identification methods, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that

  5. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), with which a steerable parametric loudspeaker can be implemented by applying phased-array techniques. The convolution of the product directivity with the Westervelt directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly better agreement with measured directivity, at negligible computational cost.
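
    The mechanics of the convolution model can be sketched with assumed, illustrative directivities (the paper's exact expressions are not reproduced here):

```python
import math

# Sketch only: the far-field PLA directivity is approximated by convolving
# the product of the two primary beam patterns with a Westervelt-type
# directivity over the angle axis.  Both directivity functions below are
# assumed placeholder shapes, not the paper's expressions.

def primary(theta, n=8, d_over_lambda=0.5):
    """Uniform line-array factor as a placeholder primary directivity."""
    x = math.pi * d_over_lambda * math.sin(theta)
    return 1.0 if x == 0 else abs(math.sin(n * x) / (n * math.sin(x)))

def westervelt(theta, alpha=0.05):
    """Assumed Westervelt-type low-pass shape in angle."""
    return 1.0 / math.sqrt(1.0 + (math.sin(theta / 2.0) ** 2 / alpha) ** 2)

thetas = [math.radians(t) for t in range(-90, 91)]
product = [primary(t) ** 2 for t in thetas]      # product directivity
kernel = [westervelt(t) for t in thetas]
m, c = len(thetas), len(thetas) // 2
# centered 'same' discrete convolution over the angle axis, peak-normalized
conv = [sum(product[j] * kernel[i - j + c]
            for j in range(m) if 0 <= i - j + c < m) for i in range(m)]
peak = max(conv)
directivity = [v / peak for v in conv]
```

    The convolution broadens the narrow product beam toward the Westervelt-type shape, which is the qualitative effect the model is meant to capture.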

  6. Estimating flood magnitude and frequency at gaged and ungaged sites on streams in Alaska and conterminous basins in Canada, based on data through water year 2012

    USGS Publications Warehouse

    Curran, Janet H.; Barth, Nancy A.; Veilleux, Andrea G.; Ourso, Robert T.

    2016-03-16

    Estimates of the magnitude and frequency of floods are needed across Alaska for engineering design of transportation and water-conveyance structures, flood-insurance studies, flood-plain management, and other water-resource purposes. This report updates methods for estimating flood magnitude and frequency in Alaska and conterminous basins in Canada. Annual peak-flow data through water year 2012 were compiled from 387 streamgages on unregulated streams with at least 10 years of record. Flood-frequency estimates were computed for each streamgage using the Expected Moments Algorithm to fit a Pearson Type III distribution to the logarithms of annual peak flows. A multiple Grubbs-Beck test was used to identify potentially influential low floods in the time series of peak flows for censoring in the flood frequency analysis. For two new regional skew areas, flood-frequency estimates using station skew were computed for stations with at least 25 years of record for use in a Bayesian least-squares regression analysis to determine a regional skew value. The consideration of basin characteristics as explanatory variables for regional skew resulted in improvements in precision too small to warrant the additional model complexity, and a constant model was adopted. Regional Skew Area 1 in eastern-central Alaska had a regional skew of 0.54 and an average variance of prediction of 0.45, corresponding to an effective record length of 22 years. Regional Skew Area 2, encompassing coastal areas bordering the Gulf of Alaska, had a regional skew of 0.18 and an average variance of prediction of 0.12, corresponding to an effective record length of 59 years. Station flood-frequency estimates for study sites in regional skew areas were then recomputed using a weighted skew incorporating the station skew and regional skew.
In a new regional skew exclusion area outside the regional skew areas, the density of long-record streamgages was too sparse for regional analysis and station skew was used for all estimates. Final station flood frequency estimates for all study streamgages are presented for the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities. Regional multiple-regression analysis was used to produce equations for estimating flood frequency statistics from explanatory basin characteristics. Basin characteristics, including physical and climatic variables, were updated for all study streamgages using a geographical information system and geospatial source data. Screening for similar-sized nested basins eliminated hydrologically redundant sites, and screening for eligibility for analysis of explanatory variables eliminated regulated peaks, outburst peaks, and sites with indeterminate basin characteristics. An ordinary least-squares regression used flood-frequency statistics and basin characteristics for 341 streamgages (284 in Alaska and 57 in Canada) to determine the most suitable combination of basin characteristics for a flood-frequency regression model and to explore regional grouping of streamgages for explaining variability in flood-frequency statistics across the study area. The most suitable model for explaining flood frequency used drainage area and mean annual precipitation as explanatory variables for the entire study area as a region. Final regression equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability discharge in Alaska and conterminous basins in Canada were developed using a generalized least-squares regression.
The average standard error of prediction for the regression equations for the various annual exceedance probabilities ranged from 69 to 82 percent, and the pseudo-coefficient of determination (pseudo-R2) ranged from 85 to 91 percent. The regional regression equations from this study were incorporated into the U.S. Geological Survey StreamStats program for a limited area of the State, the Cook Inlet Basin. StreamStats is a national web-based geographic information system application that facilitates retrieval of streamflow statistics and associated information. StreamStats retrieves published data for gaged sites and, for user-selected ungaged sites, delineates drainage areas from topographic and hydrographic data, computes basin characteristics, and computes flood frequency estimates using the regional regression equations.
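
    The station/regional skew weighting mentioned above follows the standard Bulletin 17B form, in which each skew estimate is weighted inversely to its mean squared error. A sketch (the station values below are hypothetical; the regional values echo Regional Skew Area 2):

```python
# Standard Bulletin 17B skew weighting (sketch): the skew with the smaller
# mean squared error (MSE) receives the larger weight, so the more
# reliable estimate dominates the weighted skew.

def weighted_skew(g_station, mse_station, g_regional, mse_regional):
    return ((mse_regional * g_station + mse_station * g_regional)
            / (mse_station + mse_regional))

# Hypothetical station skew/MSE combined with Regional Skew Area 2 values
# from the text (regional skew 0.18, average variance of prediction 0.12):
gw = weighted_skew(g_station=-0.10, mse_station=0.30,
                   g_regional=0.18, mse_regional=0.12)
```

    The result always lies between the two input skews, closer to the one with the smaller MSE.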

  7. Differential models of twin correlations in skew for body-mass index (BMI).

    PubMed

    Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric

    2018-01-01

    Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same-sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses were then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.
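
    Fitting a skew-t mixture requires specialized software; as a simplified stand-in, a two-component normal mixture fitted by expectation-maximization illustrates how a single distribution can be decomposed into two latent classes (a sketch on toy data, not the study's LPA):

```python
import math

# Simplified stand-in for LPA (normal components instead of skew-t, toy
# data instead of BMI): EM for a 1D two-component Gaussian mixture.

def em_two_normals(data, mu=(1.0, 4.0), sd=(1.0, 1.0), w=(0.5, 0.5),
                   iters=50):
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in data:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, standard deviations
        n = [sum(r[k] for r in resp) for k in (0, 1)]
        w = tuple(nk / len(data) for nk in n)
        mu = tuple(sum(r[k] * x for r, x in zip(resp, data)) / n[k]
                   for k in (0, 1))
        sd = tuple(math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / n[k])
                   for k in (0, 1))
    return mu, sd, w

# Toy bimodal sample: one cluster near 0, one near 5.
data = [-0.3, -0.1, 0.0, 0.1, 0.3, 4.7, 4.9, 5.0, 5.1, 5.3]
mu, sd, w = em_two_normals(data)
```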

  8. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
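
    Applying the log-Pearson type III equation with the three estimated statistics can be sketched as follows, using the common Wilson-Hilferty approximation for the frequency factor (an assumption for illustration; the report's exact computation is not reproduced):

```python
from statistics import NormalDist

# Sketch: log10(Q_p) = M + K(G, p) * S, where M, S, G are the mean,
# standard deviation, and skew of the log discharges, and the frequency
# factor K is approximated by the Wilson-Hilferty transformation of the
# standard normal quantile z.

def lp3_quantile(mean_log, sd_log, skew, exceed_prob):
    z = NormalDist().inv_cdf(1.0 - exceed_prob)   # standard normal quantile
    if abs(skew) < 1e-9:
        k = z                                     # zero skew: lognormal case
    else:
        g = skew / 6.0
        k = (2.0 / skew) * ((1.0 + g * z - g * g) ** 3 - 1.0)
    return 10.0 ** (mean_log + k * sd_log)

# Hypothetical statistics: 100-year flood (1-percent exceedance).
q100 = lp3_quantile(mean_log=3.0, sd_log=0.25, skew=0.2, exceed_prob=0.01)
```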

  9. Low reproductive skew despite high male-biased operational sex ratio in a glass frog with paternal care.

    PubMed

    Mangold, Alexandra; Trenkwalder, Katharina; Ringler, Max; Hödl, Walter; Ringler, Eva

    2015-09-03

    Reproductive skew, the uneven distribution of reproductive success among individuals, is a common feature of many animal populations. Several scenarios have been proposed to favour either high or low levels of reproductive skew. In particular, a male-biased operational sex ratio and the asynchronous arrival of females are expected to cause high variation in reproductive success among males. Recently it has been suggested that the type of benefits provided by males (fixed vs. dilutable) could also strongly impact individual mating patterns, and thereby affect reproductive skew. We tested this hypothesis in Hyalinobatrachium valerioi, a Neotropical glass frog with prolonged breeding and paternal care. We monitored and genetically sampled a natural population in southwestern Costa Rica during the breeding season in 2012 and performed parentage analysis of adult frogs and tadpoles to investigate individual mating frequencies and possible mating preferences, and to estimate reproductive skew in males and females. We identified a polygamous mating system, in which high proportions of males (69 %) and females (94 %) reproduced successfully. The variance in male mating success could largely be attributed to differences in time spent calling at the reproductive site, but not to body size or relatedness. Female H. valerioi were not choosy and mated indiscriminately with available males. Our findings support the hypothesis that dilutable male benefits, such as parental care, can favour female polyandry and maintain low levels of reproductive skew among males within a population, even in the presence of direct male-male competition and a highly male-biased operational sex ratio. We hypothesize that low male reproductive skew might be a general characteristic in prolonged breeders with paternal care.

  10. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below the detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistics (rROS), and gamma regression on order statistics (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low-skewed data, the performance of the different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable, particularly when the sample size is small or the censoring percentage is high. In such conditions, MLE under the gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Regarding model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of the data is misspecified. However, the methods of rROS, GROS, and MLE under the gamma distribution are generally robust to model misspecification regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on the gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
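
    A simplified regression-on-order-statistics sketch shows the mechanics discussed above (illustrative only: it assumes a single detection limit below all detected values, whereas production ROS implementations use Hirsch-Stedinger plotting positions for multiple limits):

```python
import math
from statistics import NormalDist, mean, stdev

# Simplified ROS sketch for left-censored data: regress the log-detects on
# normal quantiles at their plotting positions, impute each nondetect from
# the fitted line at the lower plotting positions, then take moments of
# the combined sample.  Assumes all censored values rank below all detects.

def simple_ros(detects, n_censored):
    n = len(detects) + n_censored
    logs = sorted(math.log(d) for d in detects)
    nd = NormalDist()
    # Blom plotting positions; detects occupy the upper ranks
    z = [nd.inv_cdf((r - 0.375) / (n + 0.25))
         for r in range(n_censored + 1, n + 1)]
    zb, lb = mean(z), mean(logs)
    # least-squares line log(c) = a + b*z through the detected values
    b = (sum((zi - zb) * (li - lb) for zi, li in zip(z, logs))
         / sum((zi - zb) ** 2 for zi in z))
    a = lb - b * zb
    # impute the censored observations at the lower plotting positions
    zc = [nd.inv_cdf((r - 0.375) / (n + 0.25))
          for r in range(1, n_censored + 1)]
    imputed = [math.exp(a + b * zi) for zi in zc]
    full = imputed + [math.exp(li) for li in logs]
    return mean(full), stdev(full)

m, s = simple_ros([1.2, 2.5, 3.1, 4.8, 7.9], n_censored=3)
```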

  11. The skew ray ambiguity in the analysis of videokeratoscopic data.

    PubMed

    Iskander, D Robert; Davis, Brett A; Collins, Michael J

    2007-05-01

    Skew ray ambiguity is present in most videokeratoscopic measurements when azimuthal components of the corneal curvature are not taken into account. Some reported studies, based on theoretical predictions and measured test surfaces, suggest that skew ray ambiguity is significant for highly deformed corneas or decentered corneal measurements. However, the effect of skew ray ambiguity in ray tracing through videokeratoscopic data has not been studied in depth. We have evaluated the significance of the skew ray ambiguity and its effect on the analyzed corneal optics. This has been achieved by devising a procedure in which we compared the corneal wavefront aberrations estimated from 3D ray tracing with those determined from 2D (meridional-based) estimates of the refractive power. The latter was possible due to the recently developed concept of refractive Zernike power polynomials, which links the refractive power domain with that of the wavefront. Simulated corneal surfaces as well as data from a range of corneas (from two different Placido disk-based videokeratoscopes) were used to find the limit at which the difference in estimated corneal wavefronts (or the corresponding refractive powers) would have clinical significance (e.g., equivalent to 0.125 D or more). The inclusion or exclusion of the skew ray in the analyses showed some differences in the results. However, the proposed procedure showed clinically significant differences only for highly deformed corneas and only for large corneal diameters. For the overwhelming majority of surfaces, the skew ray ambiguity is not a clinically significant issue in the analysis of videokeratoscopic data, indicating that meridional processing, such as that encountered in the calculation of refractive power maps, is adequate.

  12. On river-floodplain interaction and hydrograph skewness

    NASA Astrophysics Data System (ADS)

    Fleischmann, Ayan S.; Paiva, Rodrigo C. D.; Collischonn, Walter; Sorribas, Mino V.; Pontes, Paulo R. M.

    2016-10-01

    Understanding the hydrological processes occurring within a basin by looking at its outlet hydrograph can improve and foster comprehension of ungauged regions. In this context, we present an extensive examination of the role that floodplains play in driving hydrograph shapes. Observations of many river hydrographs with large floodplain influence indicate that many of them exhibit negative skewness. Through a series of numerical experiments and analytical reasoning, we show how the relationship between flood wave celerity and discharge in such systems determines the hydrograph shapes: the more water inundates the floodplains upstream of the observation point, the more negatively skewed the observed hydrograph becomes. A case study is performed in the Amazon River Basin, where major rivers with large floodplain attenuation (e.g., Purus, Madeira, and Juruá) show stronger negative skewness in their hydrographs. This feature can also distinguish different wetland types, e.g., wetlands maintained by endogenous processes from wetlands governed by overbank flow (along river floodplains). A metric of hydrograph skewness, based on the time derivative of discharge, was developed to quantify this effect. Together with the skewness concept, it may be used in other studies concerning the relevance of floodplain attenuation in large, ungauged rivers, where remote sensing data (e.g., satellite altimetry) can be very useful.
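
    A simple stand-in for such a metric is the sample skewness of dQ/dt; the function and the toy hydrographs below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def hydrograph_skewness(q):
    """Sample skewness of the time derivative of discharge, dQ/dt.
    A flashy hydrograph (fast rise, slow recession) gives a positive
    value; the slow rise and sharper fall typical of strong floodplain
    attenuation gives a negative one."""
    dq = np.diff(np.asarray(q, dtype=float))
    dq = dq - dq.mean()
    return np.mean(dq**3) / np.mean(dq**2) ** 1.5

# Toy hydrographs: a steep 2-step rise with a gentle 20-step recession,
# and its mirror image (slow rise, sharp fall).
flashy = np.concatenate([np.linspace(0.0, 1.0, 3),
                         np.linspace(1.0, 0.0, 21)[1:]])
attenuated = flashy[::-1]
```

    The two toy shapes land on opposite sides of zero, which is the qualitative behavior the abstract describes for floodplain-attenuated rivers.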

  13. Stochastic epigenetic mutations (DNA methylation) increase exponentially in human aging and correlate with X chromosome inactivation skewing in females.

    PubMed

    Gentilini, Davide; Garagnani, Paolo; Pisoni, Serena; Bacalini, Maria Giulia; Calzari, Luciano; Mari, Daniela; Vitale, Giovanni; Franceschi, Claudio; Di Blasio, Anna Maria

    2015-08-01

    In this study we applied a new analytical strategy to investigate the relationship between stochastic epigenetic mutations (SEMs) and aging. We analysed methylation levels through the Infinium HumanMethylation27 and HumanMethylation450 BeadChips in a population of 178 subjects ranging from 3 to 106 years of age. For each CpG probe, epimutated subjects were identified as the extreme outliers with methylation levels more than three interquartile ranges below the first quartile (Q1 - (3 × IQR)) or above the third quartile (Q3 + (3 × IQR)). We demonstrated that the number of SEMs was low in childhood and increased exponentially during aging. Using the HUMARA method, skewing of X chromosome inactivation (XCI) was evaluated in heterozygous women. Multivariate analysis indicated a significant correlation between log(SEMs) and the degree of XCI skewing after adjustment for age (β = 0.41; confidence interval: 0.14, 0.68; p-value = 0.0053). The PATH analysis tested the complete model containing the variables skewing of XCI, age, log(SEMs), and overall CpG methylation. After adjusting for the number of epimutations, we failed to confirm the widely reported correlation between skewing of XCI and aging. This evidence suggests that the known correlation between XCI skewing and aging may not be a direct association but one mediated by the number of SEMs.
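
    The epimutation calling rule is straightforward to express in code; a sketch with hypothetical methylation beta values:

```python
import numpy as np

def epimutated(methylation, k=3.0):
    """Flag subjects whose methylation level at a single CpG probe lies
    more than k interquartile ranges below Q1 or above Q3 (k = 3 in the
    study's definition of a stochastic epigenetic mutation)."""
    m = np.asarray(methylation, dtype=float)
    q1, q3 = np.percentile(m, [25, 75])
    iqr = q3 - q1
    return (m < q1 - k * iqr) | (m > q3 + k * iqr)

# Hypothetical probe: 99 subjects clustered near 0.5, one extreme outlier.
beta = np.concatenate([np.linspace(0.48, 0.52, 99), [0.95]])
```

    Only the last subject exceeds Q3 + 3 × IQR at this probe; summing the flags across all probes for one subject would give that subject's SEM count.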

  14. Non-local correlations via Wigner-Yanase skew information in two SC-qubit having mutual interaction under phase decoherence

    NASA Astrophysics Data System (ADS)

    Mohamed, Abdel-Baset A.

    2017-10-01

    An analytical solution of the master equation that describes a superconducting cavity containing two coupled superconducting charge qubits is obtained. Quantum-mechanical correlations based on Wigner-Yanase skew information, namely local quantum uncertainty and uncertainty-induced quantum non-locality, are compared to the concurrence under the effects of phase decoherence. Local quantum uncertainty exhibits sudden changes during its time evolution and revival process. Sudden death and sudden birth occur only for entanglement, depending on the initial state of the two coupled charge qubits, while the skew-information correlations do not vanish. The quantum correlations of skew information are found to be sensitive to the dephasing rate, the photon number in the cavity, the interaction strength between the two qubits, and the qubit distribution angle of the initial state. With a proper initial state, the skew-information correlation retains a non-zero stationary value for a long time interval under phase decoherence, which may be useful in quantum information and computation processes.
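
    The underlying quantity, the Wigner-Yanase skew information I(ρ, K) = -½ Tr([√ρ, K]²), is easy to evaluate numerically; a single-qubit sketch (not the paper's two-qubit cavity model):

```python
import numpy as np

def wy_skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2),
    a measure of the information content of the state rho with respect to
    the observable K."""
    w, v = np.linalg.eigh(rho)
    sqrt_rho = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
    comm = sqrt_rho @ K - K @ sqrt_rho
    return -0.5 * np.trace(comm @ comm).real

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-x observable
pure = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
mixed = np.eye(2, dtype=complex) / 2              # maximally mixed state
```

    For a pure state the skew information reduces to the variance of K (here 1 for σx on |0⟩), while for the maximally mixed state √ρ commutes with everything and the skew information vanishes, mirroring how dephasing degrades these correlations.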

  15. On the Origin of Protein Superfamilies and Superfolds

    NASA Astrophysics Data System (ADS)

    Magner, Abram; Szpankowski, Wojciech; Kihara, Daisuke

    2015-02-01

    Distributions of protein families and folds in genomes are highly skewed, with a small number of prevalent superfamilies/superfolds and a large number of families/folds of small size. Why are the distributions of protein families and folds skewed? Why are there only a limited number of protein families? Here, we employ an information-theoretic approach to investigate the protein sequence-structure relationship that leads to the skewed distributions. We consider protein sequences and folds to constitute an information-theoretic channel and compute the most efficient distribution of sequences that codes for all protein folds. The identified distributions of sequences and folds are found to follow a power law, consistent with those observed for proteins in nature. Importantly, the skewed distributions of sequences and folds are suggested to have different origins: the skewed distribution of sequences is due to evolutionary pressure to achieve efficient coding of necessary folds, whereas that of folds is based on the thermodynamic stability of folds. The current study provides a new information-theoretic framework for proteins that could be widely applied to understanding protein sequences, structures, functions, and interactions.

  16. A strategy to load balancing for non-connectivity MapReduce job

    NASA Astrophysics Data System (ADS)

    Zhou, Huaping; Liu, Guangzong; Gui, Haixia

    2017-09-01

    MapReduce has been widely used on large-scale and complex datasets as a distributed programming model. The default hash partitioning function in MapReduce often causes data skew when the data distribution is uneven. To address this imbalance in data partitioning, we propose a strategy that changes the remaining partitioning index when the data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner can redirect the skew-causing partitions to reducers with less load in the next partitioning process, eventually balancing the load across nodes. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets; the results show that our strategy solves the data skew problem with better stability and efficiency than the hash method and the sampling method for non-connectivity MapReduce tasks.
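
    The reassignment step can be sketched as a greedy least-loaded allocation of counted keys; the names and structure below are illustrative, not the paper's implementation:

```python
def rebalance(key_counts, n_reducers):
    """Greedy sketch of skew-aware partitioning: instead of the plain
    hash(key) % n assignment, route each key (heaviest first) to the
    reducer with the smallest current load."""
    load = [0] * n_reducers
    assignment = {}
    for key, count in sorted(key_counts.items(), key=lambda kv: -kv[1]):
        target = min(range(n_reducers), key=load.__getitem__)
        assignment[key] = target
        load[target] += count
    return assignment, load

# Hypothetical Map-phase counts with one hot key 'a'.
counts = {'a': 90, 'b': 10, 'c': 10, 'd': 10}
assignment, load = rebalance(counts, n_reducers=2)
```

    Here the hot key ends up alone on one reducer (loads 90 and 30), whereas a hash function oblivious to the counts could co-locate it with other keys and worsen the imbalance.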

  17. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System.

    PubMed

    Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug

    2018-04-30

    The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; we therefore propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the "database for emotion analysis using physiological signals" open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.

  18. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
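
    The frequency-domain solve rests on the convolution theorem: under the DFT, circular convolution becomes a pointwise product, which is the source of the O(M N log N) cost. A quick numerical check of that identity:

```python
import numpy as np

def circ_conv(a, b):
    """Direct circular convolution, O(N^2), for comparison."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

rng = np.random.default_rng(0)
a, b = rng.standard_normal(64), rng.standard_normal(64)

# Convolution theorem: the DFT turns circular convolution into a
# pointwise product, so each filter costs O(N log N) instead of O(N^2).
fast = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
assert np.allclose(fast, circ_conv(a, b))
```

    The ADMM linear system exploits exactly this diagonalization, solving independently per frequency rather than coupling all samples.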

  19. Multithreaded implicitly dealiased convolutions

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
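
    For contrast, explicit dealiasing zero-pads the inputs so that the FFT's circular convolution reproduces the linear one; implicit dealiasing produces the same result without ever storing the padded buffers. A sketch of the explicit baseline it improves on:

```python
import numpy as np

def padded_fft_conv(a, b):
    """Explicit dealiasing baseline: zero-pad to length >= len(a)+len(b)-1
    so that the FFT's circular convolution equals the linear convolution.
    Implicit dealiasing obtains the same output while decoupling the work
    memory from the input data; this sketch only shows the aliasing
    being removed by padding."""
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

rng = np.random.default_rng(1)
a, b = rng.standard_normal(100), rng.standard_normal(100)
assert np.allclose(padded_fft_conv(a, b), np.convolve(a, b))
```

    The padded buffers here are what implicit dealiasing avoids allocating, which is where its memory and cache advantages come from.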

  20. Detecting atrial fibrillation by deep convolutional neural networks.

    PubMed

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Two different deep convolutional neural network models, corresponding to the STFT output and the SWT output, were then developed. Our method requires neither detection of P or R peaks nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT achieved a sensitivity of 98.34%, a specificity of 98.24% and an accuracy of 98.29%; the network using input generated by SWT achieved a sensitivity of 98.79%, a specificity of 97.87% and an accuracy of 98.63%. The proposed method thus shows high sensitivity, specificity and accuracy, and is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Transmission problems for Mindlin–Timoshenko plates: frictional versus viscous damping mechanisms

    NASA Astrophysics Data System (ADS)

    Ferreira, Marcio V.; Muñoz Rivera, Jaime E.; Suárez, Fredy M. S.

    2018-06-01

    In this article, we present a comparative analysis of the stabilizing effect of frictional dissipation and of the dissipation produced by viscous materials of Kelvin-Voigt type, both localized in a part of a Mindlin-Timoshenko plate. We model these dissipative mechanisms through transmission problems and show that localized frictional damping, when effective over a strategic component of the plate, produces exponential stability of the corresponding semigroup. On the other hand, although Kelvin-Voigt dissipation is considered a strong dissipation, we prove that it loses its uniform stabilizing properties when localized over a component of the material and provides only a slower, polynomial decay.

  2. Cauchy flights in confining potentials

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr

    2010-03-01

    We analyze confining mechanisms for Lévy flights evolving under an influence of external potentials. Given a stationary probability density function (pdf), we address the reverse engineering problem: design a jump-type stochastic process whose target pdf (eventually asymptotic) equals the preselected one. To this end, dynamically distinct jump-type processes can be employed. We demonstrate that one “targeted stochasticity” scenario involves Langevin systems with a symmetric stable noise. Another derives from the Lévy-Schrödinger semigroup dynamics (closely linked with topologically induced super-diffusions), which has no standard Langevin representation. For computational and visualization purposes, the Cauchy driver is employed to exemplify our considerations.

  3. Computational methods for optimal linear-quadratic compensators for infinite dimensional discrete-time systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.
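
    The core computation, solving the finite-dimensional discrete-time algebraic Riccati equation that each approximation produces, can be sketched by iterating the Riccati difference equation to a fixed point (a generic textbook scheme under stabilizability/detectability assumptions, not the authors' approximation framework):

```python
import numpy as np

def dare_iterate(A, B, Q, R, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration of the discrete-time Riccati difference
    equation P <- A'P(A - BK) + Q with K = (R + B'PB)^{-1} B'PA.
    For a stabilizable/detectable pair it converges to the algebraic
    Riccati solution giving the optimal LQ feedback gain K."""
    P = Q.copy()
    for _ in range(max_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
        P_next = A.T @ P @ (A - B @ K) + Q
        if np.max(np.abs(P_next - P)) < tol:
            return P_next, K
        P = P_next
    raise RuntimeError("Riccati iteration did not converge")

# Scalar illustrative system: A = B = Q = R = 1.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
P, K = dare_iterate(A, B, Q, R)
```

    For this scalar example the Riccati equation reduces to p² = p + 1, so the iteration converges to the golden ratio, P = (1 + √5)/2, with gain K = P/(1 + P).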

  4. The recurrence sequences via Sylvester matrices

    NASA Astrophysics Data System (ADS)

    Karaduman, Erdal; Deveci, Ömür

    2017-07-01

    In this work, we define the Pell-Jacobsthal-Sylvester sequence and the Jacobsthal-Pell-Sylvester sequence by using the Sylvester matrices obtained from the characteristic polynomials of the Pell and Jacobsthal sequences, and we then study the sequences defined modulo m. We also obtain the cyclic groups and the semigroups from the generating matrices of these sequences when read modulo m, and derive the relationships among the orders of the cyclic groups and the periods of the sequences. Furthermore, we redefine the Pell-Jacobsthal-Sylvester sequence and the Jacobsthal-Pell-Sylvester sequence by means of the elements of the groups and then examine them in the finite groups.
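
    The connection between generating matrices read modulo m and sequence periods can be illustrated with the ordinary Pell sequence, a simpler relative of the sequences defined in the paper:

```python
import numpy as np

def pell_period(m):
    """Period of the Pell sequence P(n) = 2P(n-1) + P(n-2) modulo m,
    computed as the multiplicative order of its generating matrix
    M = [[2, 1], [1, 0]] in the matrix semigroup modulo m. Since
    det(M) = -1 is invertible mod m, the order always exists."""
    M = np.array([[2, 1], [1, 0]], dtype=np.int64)
    I = np.eye(2, dtype=np.int64) % m
    acc = M % m
    k = 1
    while not np.array_equal(acc, I):
        acc = (acc @ M) % m
        k += 1
    return k
```

    For example, the Pell sequence 0, 1, 2, 5, 12, 29, ... reads 0, 1, 0, 1, ... modulo 2 (period 2) and 0, 1, 2, 2, 0, 2, 1, 1, ... modulo 3 (period 8), matching the orders of the generating matrix.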

  5. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.

  6. Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network.

    PubMed

    Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung

    2018-04-23

    In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
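
    The reported event-level scores follow from the usual definitions (expressed as fractions; the counts below are hypothetical, chosen only to reproduce a 0.96 score):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from event-level counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 9600 true detections, 400 false alarms, 400 misses.
p, r, f = prf1(tp=9600, fp=400, fn=400)
```

    With precision equal to recall, the F1-score coincides with both, which is consistent with the three identical values quoted in the abstract.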

  7. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, neighboring-object separation, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.

  8. Improved convolutional coding

    NASA Technical Reports Server (NTRS)

    Doland, G. D.

    1970-01-01

    Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. Method also increases decoding ability by removing ambiguous condition.

  9. Prototyping and Characterization of an Adjustable Skew Angle Single Gimbal Control Moment Gyroscope

    DTIC Science & Technology

    2015-03-01

    performance, and an analysis of the test results is provided. In addition to the standard battery of CMG performance tests that were planned, a...objectives for this new CMG is to provide comparable performance to the Andrews CMGs, the values in Table 1 will be used for output torque comparison...essentially fixed at 53.4°. This specific skew angle value is not the problem, as this is one commonly used CMG skew angle for satellite systems. The real

  10. A note on `Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions'

    NASA Astrophysics Data System (ADS)

    Kwong, Hok Shing; Nadarajah, Saralees

    2018-01-01

    Tarnopolski [Monthly Notices of the Royal Astronomical Society, 458 (2016) 2024-2031] analysed data sets on gamma-ray burst durations using skew distributions. He showed that the best fits are provided by two skew normal and three Gaussian distributions. Here, we suggest other distributions, including some that are heavy tailed. At least one of these distributions is shown to provide better fits than those considered in Tarnopolski. Five criteria are used to assess best fits.

  11. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by eliminating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.

  12. Performance Evaluation of UHF Fading Satellite Channel by Simulation for Different Modulation Schemes

    DTIC Science & Technology

    1992-12-01

    The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the US...u; v = cncd(2,1,6,G64,u,zeros(1,12)); %Convolutional encoding mm=bm(2,v); %Binary to M-ary conversion clear v u; mm=inter(50,200,mm); %Interleaving (50...save result err B. CNCD.M (CONVOLUTIONAL ENCODER FUNCTION) function [v,vr] = cncd(n,k,m,Gr,u,r) % CONVOLUTIONAL ENCODER % Paul H. Moose % Naval

  13. Time history solution program, L225 (TEV126). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.

    1979-01-01

    Volume 1 of a two-volume document is presented. The usage of the convolution program L225 (TEV126) is described. The program calculates the time response of a linear system by convolving the impulse response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain, and fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.

  14. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (k_0 > m_0) inputs, a state diagram of 2^(k_0) states was used for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  15. Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?

    PubMed

    Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D

    2016-01-01

    The objective of this study was to examine the impact of the transition from International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one in which the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted or incorrect, and additional resources need to be invested to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.
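
    The simple/convoluted distinction can be sketched as a property of the mapping graph; the codes below are hypothetical labels, and the shared-target criterion is a toy proxy for the paper's network analysis, not its exact definition:

```python
def classify_mappings(fwd):
    """Toy simple/convoluted classifier. `fwd` maps each ICD-9-style
    code to the set of ICD-10-style codes it translates to. A source
    code is 'simple' when none of its targets is shared with another
    source (1-to-1 or 1-to-many); when a target is shared, the
    definitions are intertwined and all sources involved are marked
    'convoluted'."""
    owners = {}
    for src, targets in fwd.items():
        for t in targets:
            owners.setdefault(t, set()).add(src)
    return {src: ('simple' if all(len(owners[t]) == 1 for t in targets)
                  else 'convoluted')
            for src, targets in fwd.items()}

# Hypothetical code labels, not real ICD codes.
toy = {'9A': {'10X'},          # one-to-one: simple
       '9B': {'10Y', '10Z'},   # one-to-many, but '10Z' is shared...
       '9C': {'10Z'}}          # ...with 9C, so both are convoluted
labels = classify_mappings(toy)
```

    Running the classifier on `toy` marks '9A' simple and both '9B' and '9C' convoluted, mirroring how intertwined mappings are the ones flagged for extra training resources.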

  16. Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?

    PubMed Central

    Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.

    2017-01-01

    Objectives The objective of this study was to examine the impact of the transition from International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple translation is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one in which the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% were convoluted, and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted or incorrect, and additional resources need to be invested to ensure a successful transition to ICD-10-CM. PMID:26769875

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.

  18. a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important approaches for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed which correctly extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract the spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through a traditional pooling layer that has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.

  19. Detection of prostate cancer on multiparametric MRI

    NASA Astrophysics Data System (ADS)

    Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy

    2017-03-01

    In this manuscript, we describe our approach and methods for the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture, described as auto-windowing, which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low-contrast edges may cause difficulty for traditional convolutional neural networks trained on high-contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can feasibly be trained on small datasets.

  20. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that the Natural Scene Statistics (NSS) features computed on convolutional feature maps are highly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer, which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer effectively describe the type and degree of distortion an image has suffered. Finally, Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
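
    A standard example of the NSS features the abstract refers to are mean-subtracted contrast-normalised (MSCN) coefficients. The sketch below computes them on a synthetic feature map with SciPy; the window width and stabilising constant are illustrative, and this is a hedged sketch of the general idea rather than the paper's forward NSS layer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(fmap, sigma=1.5, c=1e-3):
    """Mean-subtracted contrast-normalised (MSCN) coefficients, the classic
    NSS transform: subtract a local Gaussian-weighted mean and divide by a
    local Gaussian-weighted standard deviation."""
    mu = gaussian_filter(fmap, sigma)
    var = gaussian_filter(fmap * fmap, sigma) - mu * mu
    return (fmap - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

rng = np.random.default_rng(1)
feature_map = rng.normal(size=(64, 64))   # stand-in for one CFM channel
coeffs = mscn(feature_map)
# Summary statistics of the MSCN histogram (e.g. variance, kurtosis) serve
# as quality-aware features that shift measurably under blur, noise, etc.
```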

  1. Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Menke, W. H.

    2017-12-01

    We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.

  2. Skewed highway bridges.

    DOT National Transportation Integrated Search

    2013-07-01

    Many highway bridges are skewed, and their behavior and the corresponding design analysis need further study to fully accomplish design objectives. This project used physical tests and detailed finite element analysis to better understand the behavior of...

  3. LOOKING WEST, BETWEEN READING DEPOT BRIDGE AND SKEW ARCH BRIDGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOOKING WEST, BETWEEN READING DEPOT BRIDGE AND SKEW ARCH BRIDGE (HAER No. PA-116). - Philadelphia & Reading Railroad, Reading Depot Bridge, North Sixth Street at Woodward Street, Reading, Berks County, PA

  4. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study that tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, analysed with the skew-normal method. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This has the effect of providing a practical understanding of the difference in means in terms of proportions.
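
    The core of the distributional approach can be sketched as follows: estimate the proportion below a clinical cutoff from the fitted distribution rather than by counting observations. This sketch uses the normal case with simulated data; the paper's contribution is the skew-normal extension, which would replace the normal CDF with a fitted skew-normal one.

```python
import numpy as np
from scipy.stats import norm

def distributional_proportion(sample, cutoff):
    """Model-based estimate of P(Y < cutoff) from the fitted normal mean
    and SD, the core of the normal distributional method."""
    mu, sd = sample.mean(), sample.std(ddof=1)
    return norm.cdf((cutoff - mu) / sd)

rng = np.random.default_rng(2)
birthweight = rng.normal(3400, 500, size=2000)           # simulated, grams
p_model = distributional_proportion(birthweight, 2500)   # "low birthweight"
p_empirical = (birthweight < 2500).mean()                # raw counting
```

    Because the model-based proportion inherits its precision from the estimated mean and SD, its confidence interval reflects the comparison of means rather than the coarser binomial count.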

  5. Enhancing tumor apparent diffusion coefficient histogram skewness stratifies the postoperative survival in recurrent glioblastoma multiforme patients undergoing salvage surgery.

    PubMed

    Zolal, Amir; Juratli, Tareq A; Linn, Jennifer; Podlesek, Dino; Sitoci Ficici, Kerim Hakan; Kitzler, Hagen H; Schackert, Gabriele; Sobottka, Stephan B; Rieger, Bernhard; Krex, Dietmar

    2016-05-01

    Objective: To determine the value of apparent diffusion coefficient (ADC) histogram parameters for the prediction of individual survival in patients undergoing surgery for recurrent glioblastoma (GBM) in a retrospective cohort study. Methods: Thirty-one patients who underwent surgery for first recurrence of a known GBM between 2008 and 2012 were included. The following parameters were collected: age, sex, enhancing tumor size, mean ADC, median ADC, ADC skewness, ADC kurtosis and fifth percentile of the ADC histogram, initial progression-free survival (PFS), extent of second resection and further adjuvant treatment. The association of these parameters with survival and PFS after second surgery was analyzed using the log-rank test and Cox regression. Results: Using the log-rank test, ADC histogram skewness of the enhancing tumor was significantly associated with both survival (p = 0.001) and PFS after second surgery (p = 0.005). Further parameters associated with prolonged survival after second surgery were: gross total resection at second surgery (p = 0.026), tumor size (p = 0.040) and third surgery (p = 0.003). In the multivariate Cox analysis, ADC histogram skewness was shown to be an independent prognostic factor for survival after second surgery. Conclusion: ADC histogram skewness of the enhancing lesion, enhancing lesion size, third surgery, as well as gross total resection have been shown to be associated with survival following the second surgery. ADC histogram skewness was an independent prognostic factor for survival in the multivariate analysis.
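
    Histogram skewness of the kind used here is a one-line computation once the lesion's ADC values are extracted. The sketch below uses simulated values only, to show how a right-tailed ADC histogram separates from a roughly symmetric one; the parameter choices are illustrative, not taken from the study.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
# Hypothetical ADC values (x10^-6 mm^2/s) for two enhancing lesions:
# one roughly symmetric histogram, one with a heavy right tail.
adc_symmetric = rng.normal(1100, 150, size=5000)
adc_right_tailed = rng.gamma(shape=2.0, scale=200, size=5000) + 800

s_sym = skew(adc_symmetric)       # near zero
s_tail = skew(adc_right_tailed)   # clearly positive
```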

  6. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI, which contain joint spectral-spatial information. In the subsequent feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D vectors. Thus, DVCNN can not only extract more accurate and richer features than CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the computation is simplified by dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and by 19.63% on the Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement achieved by DVCNN was 13.72% compared with other state-of-the-art HSI classification methods, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.

  7. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    NASA Astrophysics Data System (ADS)

    Pipień, M.

    2008-09-01

    We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations and also a constructive method. Based on the daily excess returns of the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as a posterior analysis of the positive sign of the tested relationship.
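
    Of the skewing mechanisms listed, the inverse-scale-factor construction (in the style of Fernández and Steel) is the easiest to sketch: a symmetric density is rescaled differently on each side of the mode. The code below checks numerically that the resulting density still integrates to one; the choice of a standard normal base density and gamma = 1.5 is purely illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def skewed_pdf(x, gamma):
    """Inverse-scale-factor skewing of a symmetric density: scale x by
    gamma on the right of the mode and by 1/gamma on the left.
    gamma > 1 gives right skew; gamma = 1 recovers the symmetric base."""
    c = 2.0 / (gamma + 1.0 / gamma)   # normalising constant
    return c * np.where(x >= 0, norm.pdf(x / gamma), norm.pdf(x * gamma))

# The density should still integrate to one over the real line.
left, _ = quad(lambda x: skewed_pdf(x, 1.5), -np.inf, 0)
right, _ = quad(lambda x: skewed_pdf(x, 1.5), 0, np.inf)
total = left + right
```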

  8. The role of semantics, pre-emption and skew in linguistic distributions: the case of the un-construction

    PubMed Central

    Ibbotson, Paul

    2013-01-01

    We use the Google Ngram database, a corpus of 5,195,769 digitized books containing ~4% of all books ever published, to test three ideas that are hypothesized to account for linguistic generalizations: verbal semantics, pre-emption and skew. Using 828,813 tokens of un-forms as a test case for these mechanisms, we found that verbal semantics was a good predictor of the frequency of un-forms in the English language over the past 200 years, both in terms of how the frequency changed over time and in terms of their frequency rank. We did not find strong evidence for the direct competition of un-forms and their top pre-emptors; however, the skew of the un-construction's competitors was inversely correlated with the acceptability of the un-form. We suggest a cognitive explanation for this, namely, that the more the set of relevant pre-emptors is skewed, the more easily it is retrieved from memory. This suggests that it is not just the frequency of pre-emptive forms that must be taken into account when trying to explain usage patterns, but their skew as well. PMID:24399991

  9. Cross-frame connection details for skewed steel bridges.

    DOT National Transportation Integrated Search

    2010-10-30

    This report documents a research investigation on connection details and bracing layouts for stability : bracing of steel bridges with skewed supports. Cross-frames and diaphragms play an important role in stabilizing : steel girders, particularly du...

  10. Skew chicane based betatron eigenmode exchange module

    DOEpatents

    Douglas, David

    2010-12-28

    A skewed chicane eigenmode exchange module (SCEEM) that combines in a single beamline segment the separate functionalities of a skew quad eigenmode exchange module and a magnetic chicane. This module allows the exchange of independent betatron eigenmodes, alters electron beam orbit geometry, and provides longitudinal parameter control with dispersion management in a single beamline segment with stable betatron behavior. It thus reduces the spatial requirements for multiple beam dynamic functions, reduces required component counts and thus reduces costs, and allows the use of more compact accelerator configurations than prior art design methods.

  11. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
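
    For readers unfamiliar with the terminology, a rate one-half, constraint length five convolutional encoder emits two parity bits per input bit from a five-bit sliding window. The sketch below illustrates that structure; the generator taps are illustrative placeholders, not the specific optimal code identified in the study.

```python
def conv_encode(bits, gens=(0b10011, 0b11101)):
    """Rate one-half convolutional encoder with constraint length K = 5:
    each input bit is shifted into a 5-bit window, and one parity bit is
    produced per generator polynomial."""
    K = 5
    state = 0
    out = []
    for b in list(bits) + [0] * (K - 1):        # tail bits flush the encoder
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)   # parity of tapped bits
    return out

message = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(message)    # two code bits per (message + tail) bit
```

    Terminating with K-1 zero tail bits drives the encoder back to the all-zero state, which is what lets a maximum-likelihood (Viterbi) decoder end its trellis search in a known state.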

  12. A unitary convolution approximation for the impact-parameter dependent electronic energy loss

    NASA Astrophysics Data System (ADS)

    Schiwietz, G.; Grande, P. L.

    1999-06-01

    In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.

  13. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered that is ideal for inner-system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.

  14. On the application of a fast polynomial transform and the Chinese remainder theorem to compute a two-dimensional convolution

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.

    1980-01-01

    A fast algorithm is developed to compute two-dimensional convolutions of an array of d1 x d2 complex number points, where d2 = 2^m and d1 = 2^(m-r+1) for some 1 <= r <= m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
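
    The conventional FFT method that the algorithm is compared against can be sketched directly: zero-pad both arrays to the full output size, multiply their 2D FFTs pointwise, and invert. This NumPy sketch illustrates that baseline, not the polynomial-transform/Chinese-remainder algorithm itself; the array sizes are arbitrary.

```python
import numpy as np

def conv2d_direct(a, b):
    """Naive full 2D linear convolution, used here as a reference."""
    m1, n1 = a.shape
    m2, n2 = b.shape
    out = np.zeros((m1 + m2 - 1, n1 + n2 - 1))
    for i in range(m2):
        for j in range(n2):
            out[i:i + m1, j:j + n1] += b[i, j] * a   # shift-and-add
    return out

def conv2d_fft(a, b):
    """2D linear convolution via zero-padded FFTs: pointwise products in
    the frequency domain replace the spatial double sum."""
    shape = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(a, shape) * np.fft.fft2(b, shape)))

rng = np.random.default_rng(4)
x = rng.normal(size=(8, 16))   # a d1 x d2 array
h = rng.normal(size=(3, 3))
```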

  15. Cascaded K-means convolutional feature learner and its application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu

    2017-09-01

    Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner, which shares a similar topology with a convolutional neural network, is presented to solve these problems, with application to face recognition. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.

  16. Efficient Modeling of Gravity Fields Caused by Sources with Arbitrary Geometry and Arbitrary Density Distribution

    NASA Astrophysics Data System (ADS)

    Wu, Leyuan

    2018-01-01

    We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution, defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
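
    The circulant-embedding step can be sketched in isolation: a Toeplitz matrix-vector product (a discrete convolution) is embedded in a circulant matrix of twice the size, which the FFT diagonalises. This is a generic 1D illustration of the technique, not the Conv-Gauss-FFT code; the sizes and values are arbitrary.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x
    in O(n log n): embed it in a 2n-point circulant and use the FFT."""
    n = len(x)
    # First column of the circulant embedding:
    # [c_0..c_{n-1}, 0, r_{n-1}, ..., r_1]  (the middle entry is arbitrary).
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x, len(circ)))
    return np.real(y[:n])   # first n entries recover the Toeplitz product

rng = np.random.default_rng(5)
n = 16
c, r = rng.normal(size=n), rng.normal(size=n)
r[0] = c[0]                 # Toeplitz matrices share the corner element
x = rng.normal(size=n)
y_fast = toeplitz_matvec_fft(c, r, x)
```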

  17. A convolutional neural network to filter artifacts in spectroscopic MRI.

    PubMed

    Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D

    2018-03-09

    Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.

  18. Accelerated Cartesian expansion (ACE) based framework for the rapid evaluation of diffusion, lossy wave, and Klein-Gordon potentials

    DOE PAGES

    Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...

    2010-08-27

    Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost complexity of the spatio-temporal convolutions, associated with evaluating the potentials, scales as O(Ns^2 Nt^2), where Ns and Nt are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(Ns Nt log^2 Nt). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.

  19. Assessment of treatment response during chemoradiation therapy for pancreatic cancer based on quantitative radiomic analysis of daily CTs: An exploratory study.

    PubMed

    Chen, Xiaojian; Oshima, Kiyoko; Schott, Diane; Wu, Hui; Hall, William; Song, Yingqiu; Tao, Yalan; Li, Dingjie; Zheng, Cheng; Knechtges, Paul; Erickson, Beth; Li, X Allen

    2017-01-01

    In an effort toward early assessment of treatment response, we investigate radiation-induced changes in quantitative CT features of the tumor during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Diagnostic-quality CT data acquired daily during routine CT-guided CRT using a CT-on-rails for 20 pancreatic head cancer patients were analyzed. On each daily CT, the pancreatic head, the spinal cord and the aorta were delineated and the histograms of CT number (CTN) in these contours were extracted. Eight histogram-based radiomic metrics, including the mean CTN (MCTN), peak position, volume, standard deviation (SD), skewness, kurtosis, energy and entropy, were calculated for each fraction. A paired t-test was used to check the significance of the change of a specific metric at a specific time, and a GEE model was used to test the association between changes of metrics over time for different pathology responses. In general, the CTN histogram in the pancreatic head (but not in the spinal cord) changed during CRT delivery. Changes from the 1st to the 26th fraction in MCTN ranged from -15.8 to 3.9 HU with an average of -4.7 HU (p<0.001). Meanwhile, the volume decreased, the skewness increased (less skewed), and the kurtosis decreased (less peaked). The changes in MCTN, volume, skewness, and kurtosis became significant after two weeks of treatment. Pathological response was associated with the changes in MCTN, SD, and skewness. In cases of good response, patients tended to have large reductions in MCTN and skewness, and large increases in SD and kurtosis. Significant changes in CT radiomic features, such as the MCTN, skewness, and kurtosis of the tumor, were observed during the course of CRT for pancreatic cancer based on quantitative analysis of daily CTs. These changes may potentially be used for early assessment of treatment response and stratification for therapeutic intensification.
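
    The eight histogram metrics can be reproduced from a vector of CT numbers in a few lines. The sketch below runs on simulated HU values; the bin count and the use of voxel count as a volume proxy are assumptions of the example, not the study's protocol.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def histogram_metrics(ctn, bins=64):
    """Histogram-based metrics of the kind named in the abstract, computed
    from a vector of CT numbers inside a contour."""
    counts, edges = np.histogram(ctn, bins=bins)
    p = counts / counts.sum()                   # normalised histogram
    centers = 0.5 * (edges[:-1] + edges[1:])
    nz = p[p > 0]                               # avoid log(0) in entropy
    return {
        "mean": ctn.mean(),
        "peak_position": centers[np.argmax(counts)],
        "volume_proxy": ctn.size,               # voxel count stands in for volume
        "sd": ctn.std(ddof=1),
        "skewness": skew(ctn),
        "kurtosis": kurtosis(ctn),
        "energy": np.sum(p ** 2),
        "entropy": -np.sum(nz * np.log2(nz)),
    }

rng = np.random.default_rng(6)
pancreas_ctn = rng.normal(45, 12, size=4000)    # simulated HU values
m = histogram_metrics(pancreas_ctn)
```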

  20. Determining collective barrier operation skew in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    2015-11-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes, until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
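
    The claimed procedure can be emulated on a single machine with threads standing in for compute nodes. This is a hedged sketch of the measurement logic only, not the parallel computer implementation; the node count and delay constant are arbitrary.

```python
import threading
import time

def measure_barrier_skew(n_nodes=4, delay=0.05):
    """For each choice of delayed node: all other threads enter a barrier,
    the delayed one enters late, and the delayed node's barrier completion
    time (enter-to-release) is recorded. Skew is max minus min of those
    completion times."""
    completion = {}

    def node(rank, delayed, barrier):
        if rank == delayed:
            time.sleep(delay)              # artificial delay before entering
        t0 = time.perf_counter()
        barrier.wait()                     # the collective barrier operation
        if rank == delayed:
            completion[delayed] = time.perf_counter() - t0

    for delayed in range(n_nodes):
        barrier = threading.Barrier(n_nodes)
        threads = [threading.Thread(target=node, args=(r, delayed, barrier))
                   for r in range(n_nodes)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    times = list(completion.values())
    return max(times) - min(times)

skew_estimate = measure_barrier_skew()
```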

  2. Auto-Calibration and Fault Detection and Isolation of Skewed Redundant Accelerometers in Measurement While Drilling Systems.

    PubMed

    Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza

    2018-02-27

    The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and performed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. The optimal structure, which includes four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers and of all combinations of three accelerometers was carried out. Consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By making sensors in the new optimal skewed redundant structure faulty, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors.

  3. Auto-Calibration and Fault Detection and Isolation of Skewed Redundant Accelerometers in Measurement While Drilling Systems

    PubMed Central

    Seyed Moosavi, Seyed Mohsen; Moshiri, Behzad; Arvan, Mohammad Reza

    2018-01-01

    The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and performed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. The optimal structure, which includes four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers and of all combinations of three accelerometers was carried out. Consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By making sensors in the new optimal skewed redundant structure faulty, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors. PMID:29495434

  4. Enhanced line integral convolution with flow feature detection

    DOT National Transportation Integrated Search

    1995-01-01

    Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture imag...
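
    A minimal LIC can be written in a few dozen lines: trace a short streamline through each pixel with Euler steps and average a white-noise texture along it. The sketch below uses a box kernel, wrap-around boundaries, and a uniform flow field for the demonstration; all parameters are illustrative, and this is not the enhanced algorithm of the report.

```python
import numpy as np

def lic(vx, vy, noise, length=8, step=0.5):
    """Minimal line integral convolution: from every pixel, follow the
    streamline forward and backward with Euler steps and average the
    noise texture sampled along it (box filter kernel)."""
    H, W = noise.shape
    out = np.zeros_like(noise)
    for i in range(H):
        for j in range(W):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):
                x, y = float(j), float(i)
                for _ in range(length):
                    r, c = int(round(y)) % H, int(round(x)) % W
                    total += noise[r, c]
                    count += 1
                    n = np.hypot(vx[r, c], vy[r, c])
                    if n == 0:
                        break                    # stagnation point
                    x += direction * step * vx[r, c] / n
                    y += direction * step * vy[r, c] / n
            out[i, j] = total / count
    return out

rng = np.random.default_rng(7)
tex = rng.random((32, 32))       # white-noise input texture
vx = np.ones((32, 32))           # uniform horizontal flow field
vy = np.zeros((32, 32))
img = lic(vx, vy, tex)           # noise smeared along the flow direction
```

    Averaging along streamlines correlates pixel values in the flow direction while leaving them uncorrelated across it, which is what makes the overall flow pattern visible.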

  5. Inbreeding avoidance and female mate choice shape reproductive skew in capuchin monkeys (Cebus capucinus imitator).

    PubMed

    Wikberg, Eva C; Jack, Katharine M; Fedigan, Linda M; Campos, Fernando A; Yashima, Akiko S; Bergstrom, Mackenzie L; Hiwatashi, Tomohide; Kawamura, Shoji

    2017-01-01

    Reproductive skew in multimale groups may be determined by the need for alpha males to offer reproductive opportunities as staying incentives to subordinate males (concessions), by the relative fighting ability of the alpha male (tug-of-war) or by how easily females can be monopolized (priority-of-access). These models have rarely been investigated in species with exceptionally long male tenures, such as white-faced capuchins, where female mate choice for novel unrelated males may be important in shaping reproductive skew. We investigated reproductive skew in white-faced capuchins at Sector Santa Rosa, Costa Rica, using 20 years of demographic, behavioural and genetic data. Infant survival and alpha male reproductive success were highest in small multimale groups, which suggests that the presence of subordinate males can be beneficial to the alpha male, in line with the concession model's assumptions. None of the skew models predicted the observed degree of reproductive sharing, and the probability of an alpha male producing offspring was not affected by his relatedness to subordinate males, whether he resided with older subordinate males, whether he was prime aged, the number of males or females in the group or the number of infants conceived within the same month. Instead, the alpha male's probability of producing offspring decreased when he was the sire of the mother, was weak and lacked a well-established position and had a longer tenure. Because our data best supported the inbreeding avoidance hypothesis and female choice for strong novel mates, these hypotheses should be taken into account in future skew models. © 2016 John Wiley & Sons Ltd.

  6. Thermal response of a highly skewed integral bridge.

    DOT National Transportation Integrated Search

    2012-06-01

    The purpose of this study was to conduct a field evaluation of a highly skewed semi-integral bridge in order to provide : feedback regarding some of the assumptions behind the design guidelines developed by the Virginia Department of : Transportation...

  7. Theoretical and field experimental evaluation of skewed modular slab bridges.

    DOT National Transportation Integrated Search

    2012-12-01

    As a result of longitudinal cracking discovered in the concrete overlays of some recently-built skewed bridges, the Maryland State Highway Administration (SHA) requested that this research project be conducted for two purposes: (1) to determine t...

  8. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. On standard digital computers, 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations on efficient real-time applications. Nevertheless, neuro-cortex-inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each solves the problem in a different way. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out, so memory bandwidth and size are important for good performance. Spike-based convolution processors, on the other hand, are a frame-free alternative that can convolve a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, their hardware resources must be available all the time and cannot be time-multiplexed, so the hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparative study of these two neuro-inspired solutions, with a brief description of both systems and a discussion of their differences, pros, and cons. PMID:22518097
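
    A direct implementation of the 2D convolution operation discussed above makes the resource cost concrete. This is a minimal sketch, not code from the paper; the function name and the ramp-image/edge-kernel example are illustrative.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Direct 2D convolution ('valid' mode): flip the kernel, then slide and sum."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    k = kernel[::-1, ::-1]  # flip both axes for true convolution
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)   # a simple intensity ramp
edge = np.array([[1.0, -1.0]])                   # horizontal difference kernel
print(conv2d_valid(img, edge))                   # every entry is 1.0 for this ramp
```

    The two nested loops over every output pixel are exactly the multiply-accumulate workload that the frame-based and spike-based processors organize differently.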

  10. LQR Control of Thin Shell Dynamics: Formulation and Numerical Implementation

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1997-01-01

    A PDE-based feedback control method for thin cylindrical shells with surface-mounted piezoceramic actuators is presented. Donnell-Mushtari equations modified to incorporate both passive and active piezoceramic patch contributions are used to model the system dynamics. The well-posedness of this model and the associated LQR problem with an unbounded input operator are established through analytic semigroup theory. The model is discretized using a Galerkin expansion with basis functions constructed from Fourier polynomials tensored with cubic splines, and convergence criteria for the associated approximate LQR problem are established. The effectiveness of the method for attenuating the coupled longitudinal, circumferential and transverse shell displacements is illustrated through a set of numerical examples.

  11. Vibrating Systems with Singular Mass-Inertia Matrices

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1996-01-01

    Vibrating systems with singular mass-inertia matrices arise in recent continuum models of Smart Structures (beams with PZT strips) in assessing the damping attainable with rate feedback. While they do not quite yield 'distributed' controls, we show that they can provide a fixed nonzero lower bound for the damping coefficient at all mode frequencies. The mathematical machinery for modelling the motion involves the theory of Semigroups of Operators. We consider a Timoshenko model for torsion only, a 'smart string,' where the damping coefficient turns out to be a constant at all frequencies. We also observe that the damping increases initially with the feedback gain but decreases to zero eventually as the gain increases without limit.

  12. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  13. Distribution-valued initial data for the complex Ginzburg-Landau equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levermore, C.D.; Oliver, M.

    1997-11-01

    The generalized complex Ginzburg-Landau (CGL) equation with a nonlinearity of order 2σ + 1 in d spatial dimensions has a unique local classical solution for distributional initial data in the Sobolev space H^q provided that q > d/2 - 1/σ. This result directly corresponds to a theorem for the nonlinear Schrödinger (NLS) equation proved by Cazenave and Weissler in 1990. While the proof in the NLS case relies on Besov space techniques, it is shown here that for the CGL equation, the smoothing properties of the linear semigroup can be used to obtain an almost optimal result by elementary means.

  14. The decoding of majority-multiplexed signals by means of dyadic convolution

    NASA Astrophysics Data System (ADS)

    Losev, V. V.

    1980-09-01

    The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform that can be used to reduce the number of computations.
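
    The fast transform itself is not spelled out in the abstract; what follows is a standard sketch of dyadic (XOR-index) convolution via the fast Walsh-Hadamard transform, which diagonalizes it. The function names and the length-4 example are illustrative.

```python
import numpy as np

def fwht(a):
    """In-place fast Walsh-Hadamard transform; len(a) must be a power of two."""
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def dyadic_convolution(f, g):
    """Dyadic convolution (f*g)[k] = sum_i f[i] * g[k ^ i], via two fast transforms."""
    n = len(f)
    return fwht(fwht(f) * fwht(g)) / n

f = [1.0, 2.0, 3.0, 4.0]
g = [0.5, 0.0, 0.5, 0.0]
brute = [sum(f[i] * g[k ^ i] for i in range(4)) for k in range(4)]
print(dyadic_convolution(f, g))  # [2. 3. 2. 3.], matching the brute-force sum
```

    The transform route costs O(N log N) operations instead of the O(N^2) of the direct XOR sum, which is the kind of saving the paper exploits.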

  15. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  16. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduces the concept and principles of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, and elucidates in detail a convolution strategy for calculating the in vivo absorption performance of a pharmaceutical from its pharmacokinetic data in Excel, carrying the results forward to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Second, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of the method is demonstrated in detail, and its simplicity and effectiveness are shown by comparison with the compartment model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
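
    The trial-and-error fitting of a Weibull input function described above can be mimicked with a coarse grid search. The grid ranges, the synthetic "observed" data, and the function names below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def weibull_input(t, fmax, td, b):
    """Cumulative fraction absorbed as a Weibull function: fmax*(1 - exp(-(t/td)**b))."""
    return fmax * (1.0 - np.exp(-(t / td) ** b))

t = np.linspace(0.1, 12.0, 60)
observed = weibull_input(t, 0.9, 3.0, 1.5)   # stand-in for in vivo absorption data

# trial-and-error over a coarse parameter grid: the spreadsheet search made explicit
best = min(
    ((np.sum((weibull_input(t, f, td, b) - observed) ** 2), f, td, b)
     for f in np.arange(0.5, 1.01, 0.1)
     for td in np.arange(1.0, 5.1, 0.5)
     for b in np.arange(0.5, 2.51, 0.25)),
    key=lambda r: r[0])
sse, f_best, td_best, b_best = best
print(f_best, td_best, b_best)   # recovers approximately (0.9, 3.0, 1.5)
```

    In practice one would minimize the error between the convolved prediction and the observed plasma profile rather than the input itself, but the search loop is the same.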

  17. DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.

    PubMed

    Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh

    2017-09-01

    Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.

  18. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.

    PubMed

    Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol

    2018-01-01

    The features of the event-related potential (ERP) are not yet completely understood, and the illiteracy problem remains unsolved. The P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital and parietal lobes, whereas illiterate subjects only show correlation between neural activities in the frontal and central lobes. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.

  20. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. In contrast to most earlier methods, which require knowing beforehand the local information for corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches often perform poorly on images with large corruptions, or when inpainting a low-resolution image, we also share parameters across local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections are designed between symmetric convolutional layers. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and that it works excellently when performing super-resolution and image inpainting simultaneously.

  1. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.

  2. Field measurements on skewed semi-integral bridge with elastic inclusion : instrumentation report.

    DOT National Transportation Integrated Search

    2006-01-01

    This project was designed to enhance the Virginia Department of Transportation's expertise in the design of integral bridges, particularly as it applies to highly skewed structures. Specifically, the project involves extensive monitoring of a semi-in...

  3. INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO

    DOT National Transportation Integrated Search

    2018-01-15

    This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...

  4. Effect of implementing lean-on bracing in skewed steel I-girder bridges.

    DOT National Transportation Integrated Search

    2016-09-01

    Skew of the supports in steel I-girder bridges causes undesirable torsional effects, increases cross-frame forces, and generally increases the difficulty of designing and constructing a bridge. The girders experience differential deflections due to th...

  5. Systems of Differential Equations with Skew-Symmetric, Orthogonal Matrices

    ERIC Educational Resources Information Center

    Glaister, P.

    2008-01-01

    The solution of a system of linear, inhomogeneous differential equations is discussed. The particular class considered is where the coefficient matrix is skew-symmetric and orthogonal, and where the forcing terms are sinusoidal. More general matrices are also considered.
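
    For the class considered, skew-symmetry (A^T = -A) combined with orthogonality (A^T A = I) forces A^2 = -I, so the homogeneous solution is a rotation: exp(tA) = cos(t) I + sin(t) A. A minimal numerical check, with the 2x2 rotation generator as an illustrative choice:

```python
import numpy as np

# A is skew-symmetric AND orthogonal, so A @ A == -I and exp(tA) = cos(t) I + sin(t) A
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def exp_tA(t):
    """Matrix exponential of tA, using A^2 = -I (no series needed)."""
    return np.cos(t) * np.eye(2) + np.sin(t) * A

x0 = np.array([1.0, 0.0])
x = exp_tA(np.pi / 2) @ x0   # quarter-turn rotation of the initial condition
print(x)                     # approximately [0, 1]
```

    With sinusoidal forcing added, the particular solution follows from the same closed form for exp(tA) by variation of parameters.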

  6. Evaluation of selected warning signs at skewed railroad-highway crossings.

    DOT National Transportation Integrated Search

    1986-01-01

    A 1984 study by the Research Council recommended that advance warning signs be placed in advance of skewed railroad-highway grade crossings. Several signs were suggested for use, and the study reported here was undertaken to determine the effectivene...

  7. Latitudinal variation in the shape of the species body size distribution: an analysis using freshwater fishes.

    PubMed

    Knouft, Jason H

    2004-05-01

    Many taxonomic and ecological assemblages of species exhibit a right-skewed body size-frequency distribution when characterized at a regional scale. Although this distribution has been frequently described, factors influencing geographic variation in the distribution are not well understood, nor are mechanisms responsible for distribution shape. In this study, variation in the species body size-frequency distributions of 344 regional communities of North American freshwater fishes is examined in relation to latitude, species richness, and taxonomic composition. Although the distribution of all species of North American fishes is right-skewed, a negative correlation exists between latitude and regional community size distribution skewness, with size distributions becoming left-skewed at high latitudes. This relationship is not an artifact of the confounding relationship between latitude and species richness in North American fishes. The negative correlation between latitude and regional community size distribution skewness is partially due to the geographic distribution of families of fishes and apparently enhanced by a nonrandom geographic distribution of species within families. These results are discussed in the context of previous explanations of factors responsible for the generation of species size-frequency distributions related to the fractal nature of the environment, energetics, and evolutionary patterns of body size in North American fishes.

  8. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
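
    Zero fill itself is quick to demonstrate: padding the time-domain record with zeros before the FFT evaluates the same spectrum on a denser frequency grid, so the coarse spectrum reappears at every factor-th sample. A small sketch (the test tone and padding factor are illustrative):

```python
import numpy as np

def zero_fill_interpolate(x, factor):
    """Interpolate a spectrum by zero-padding the record before the FFT."""
    padded = np.concatenate([x, np.zeros((factor - 1) * len(x))])
    return np.fft.fft(padded)

n = 8
x = np.cos(2 * np.pi * 1.25 * np.arange(n) / n)   # off-bin tone
dense = zero_fill_interpolate(x, 4)                # 4x finer spectral spacing
coarse = np.fft.fft(x)
print(np.allclose(dense[::4], coarse))             # True: original bins are preserved
```

    The repetitive-convolution alternative proposed in the paper reaches the intermediate values directly, without transforming the padded (mostly zero) record.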

  9. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 GBPS. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing the maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  10. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but rather the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
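
    The numerical convolution described here, with deconvolution as its point-by-point inversion, can be sketched as a discrete roundtrip. The rectangle-rule discretization, function names, and exponential impulse response below are illustrative assumptions, not the article's spreadsheet layout.

```python
import numpy as np

def convolve_response(input_rate, uir, dt):
    """Response = dt * discrete convolution of input rate with unit impulse response."""
    return dt * np.convolve(input_rate, uir)[:len(input_rate)]

def deconvolve_input(response, uir, dt):
    """Invert the convolution above sample by sample (requires uir[0] != 0)."""
    n = len(response)
    inp = np.zeros(n)
    for i in range(n):
        acc = sum(inp[k] * uir[i - k] for k in range(i))
        inp[i] = (response[i] / dt - acc) / uir[0]
    return inp

dt = 0.5
t = np.arange(0, 10, dt)
uir = np.exp(-t)                      # illustrative unit impulse response
rate = np.exp(-((t - 3.0) ** 2))      # illustrative in vitro input rate
plasma = convolve_response(rate, uir, dt)
recovered = deconvolve_input(plasma, uir, dt)
print(np.allclose(recovered, rate))   # True: the roundtrip returns the input
```

    The deconvolution loop makes the point in the abstract explicit: it is nothing more than solving the convolution equations in order, one sample at a time.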

  11. Factoring out nondecision time in choice reaction time data: Theory and implications.

    PubMed

    Verdonck, Stijn; Tuerlinckx, Francis

    2016-03-01

    Choice reaction time (RT) experiments are an invaluable tool in psychology and neuroscience. A common assumption is that the total choice response time is the sum of a decision and a nondecision part (time spent on perceptual and motor processes). While the decision part is typically modeled very carefully (commonly with diffusion models), a simple and ad hoc distribution (mostly uniform) is assumed for the nondecision component. Nevertheless, it has been shown that the misspecification of the nondecision time can severely distort the decision model parameter estimates. In this article, we propose an alternative approach to the estimation of choice RT models that elegantly bypasses the specification of the nondecision time distribution by means of an unconventional convolution of data and decision model distributions (hence called the D*M approach). Once the decision model parameters have been estimated, it is possible to compute a nonparametric estimate of the nondecision time distribution. The technique is tested on simulated data, and is shown to systematically remove traditional estimation bias related to misspecified nondecision time, even for a relatively small number of observations. The shape of the actual underlying nondecision time distribution can also be recovered. Next, the D*M approach is applied to a selection of existing diffusion model application articles. For all of these studies, substantial quantitative differences with the original analyses are found. For one study, these differences radically alter its final conclusions, underlining the importance of our approach. Additionally, we find that strongly right skewed nondecision time distributions are not at all uncommon. (c) 2016 APA, all rights reserved.
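
    The convolution underlying the decomposition above (observed RT as the sum of an independent decision time and nondecision time) can be illustrated with discrete probability mass functions. The two pmfs below are illustrative shapes, not estimates from the article.

```python
import numpy as np

# pmfs of the two latency components on a common time grid (illustrative shapes)
decision = np.array([0.0, 0.1, 0.3, 0.3, 0.2, 0.1])
nondecision = np.array([0.2, 0.5, 0.3])

# the pmf of a sum of independent components is the convolution of their pmfs
rt = np.convolve(decision, nondecision)
print(rt.sum())  # still a pmf: sums to 1
```

    The D*M trick works in the opposite direction: given the data distribution and a parametric decision model, the nondecision pmf is the remaining factor of this convolution.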

  12. Acral melanoma detection using a convolutional neural network for dermoscopy images.

    PubMed

    Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho

    2018-01-01

    Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), confirmed by histopathological examination, were analyzed in this study. To perform the 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with the evaluations of a dermatologist and a non-expert. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network achieved area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to the expert's scores. Although further data analysis is necessary to improve its accuracy, convolutional neural networks could help detect acral melanoma from dermoscopy images of the hands and feet.

  13. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the amount of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.

  14. Forces in wingwalls from thermal expansion of skewed semi-integral bridges.

    DOT National Transportation Integrated Search

    2010-11-01

    Jointless bridges, such as semi-integral and integral bridges, have become more popular in recent years because of their simplicity in the construction and the elimination of high costs related to joint maintenance. Prior research has shown that skew...

  15. An Interactive Graphics Program for Assistance in Learning Convolution.

    ERIC Educational Resources Information Center

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…

  16. An investigation of safety problems at skewed rail-highway grade crossings.

    DOT National Transportation Integrated Search

    1984-01-01

    Skewed rail-highway grade crossings can be a safety problem because of the restrictions which the angle of crossing may place upon a motorist's ability to detect an oncoming train and because of the potential roadway hazard which the use of flangeway...

  17. Seismic rehabilitation of skewed and curved bridges using a new generation of buckling restrained braces : research brief.

    DOT National Transportation Integrated Search

    2016-12-01

    Damage to skewed and curved bridges during strong earthquakes is documented. This project investigates whether such damage could be mitigated by using buckling restrained braces. Nonlinear models show that using buckling restrained braces to mitigate...

  18. Field verification for the effectiveness of continuity diaphragms for skewed continuous P/C P/S concrete girder bridges.

    DOT National Transportation Integrated Search

    2009-10-01

    The research presented herein describes the field verification for the effectiveness of continuity diaphragms for skewed continuous precast, prestressed concrete girder bridges. The objectives of this research are (1) to perform field load testi...

  19. Design study for multi-channel tape recorder system, volume 2

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Skew test data are presented on a tape recorder transport with a double capstan drive for a 100 KHz tone recorded on five tracks simultaneously. Phase detectors were used to measure the skew when the center channel was the 100 KHz reference.

  20. Seismic rehabilitation of skewed and curved bridges using a new generation of buckling restrained braces.

    DOT National Transportation Integrated Search

    2016-12-01

    The objective of this project is to find effective configurations for using buckling restrained braces (BRBs) in both skewed and curved bridges for reducing the effects of strong earthquakes. Verification is performed by numerical simulation using an...

  1. The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew

    PubMed Central

    Der, Ricky; Plotkin, Joshua B.

    2014-01-01

    We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932

  2. Measurement of the Width and Skewness of Elliptic Flow Fluctuations in PbPb Collisions at 5.02 TeV with CMS

    NASA Astrophysics Data System (ADS)

    Castle, James R.; CMS Collaboration

    2017-11-01

    Flow harmonic fluctuations are studied for PbPb collisions at √(s_NN) = 5.02 TeV using the CMS detector at the LHC. Flow harmonic probability distributions p(v2) are obtained by unfolding smearing effects from observed azimuthal anisotropy distributions using particles of 0.3

  3. The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps

    NASA Astrophysics Data System (ADS)

    Simpson, D. J. W.

    2018-05-01

    In two-parameter bifurcation diagrams of piecewise-linear continuous maps on ℝ^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well-approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
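
    To make the object concrete, here is a minimal sketch of a continuous, degree-one circle map that is piecewise linear with two slopes, together with a numerically estimated rotation number. The parametrisation (slopes, break point, offset) is an illustrative choice, not the specific three-parameter family constructed in the paper.

```python
import numpy as np

def skew_sawtooth_lift(x, omega, c, s_left, s_right):
    """Lift of a continuous, degree-one circle map, piecewise linear with
    slope s_left on [0, c) and s_right on [c, 1)."""
    frac = x - np.floor(x)
    g = s_left * frac if frac < c else s_left * c + s_right * (frac - c)
    g /= s_left * c + s_right * (1 - c)   # normalise: g(1) = 1, degree one
    return np.floor(x) + g + omega

def rotation_number(omega, c, s_left, s_right, n=20_000, x0=0.3):
    """Estimate the rotation number as the mean displacement per iterate."""
    x = x0
    for _ in range(n):
        x = skew_sawtooth_lift(x, omega, c, s_left, s_right)
    return (x - x0) / n

# With equal slopes the map reduces to the rigid rotation x -> x + omega.
rho = rotation_number(0.25, 0.5, 1.0, 1.0)
```

    Sweeping omega and the slope ratio traces out the mode-locked (rational rotation number) tongues whose structure the paper analyses.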

  4. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-07-01

    We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
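
    The key mechanism above, that noisy two-point functions follow skewed distributions, is easy to reproduce: a variance (a two-point function at zero lag) estimated from n Gaussian draws follows a scaled χ²_n law, whose median sits below its mean and whose skewness is √(8/n). The sample size n = 8 below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# A variance estimated from n Gaussian draws, repeated over many trials:
# the estimates follow a scaled chi-squared law with n degrees of freedom.
n, trials = 8, 200_000
samples = rng.normal(0.0, 1.0, size=(trials, n))
var_hat = np.mean(samples**2, axis=1)    # scaled chi2_n, true value 1.0

mean_est = var_hat.mean()
median_est = np.median(var_hat)
skew_est = np.mean((var_hat - mean_est)**3) / var_hat.std()**3
# theory: skewness of chi2_n is sqrt(8/n) = 1.0 for n = 8, and the
# median lies below the mean -- typical draws come out "biased low"
```

    This is exactly the sense in which sound data drawn from a skewed sampling distribution are naturally biased low relative to the underlying truth.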

  5. Distribution of mean Doppler shift, spectral width, and skewness of coherent 50-MHz auroral radar backscatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watermann, J.; McNamara, A.G.; Sofko, G.J.

    Some 7,700 radio aurora spectra obtained from a six-link 50-MHz CW radar network set up on the Canadian prairies were analyzed with respect to the distributions of mean Doppler shift, spectral width and skewness. A comparison with recently published SABRE results obtained at 153 MHz shows substantial differences in the distributions which are probably due to different experimental and geophysical conditions. The spectra are mostly broad with mean Doppler shifts close to zero (type II spectra). The typical groupings of type I and type III spectra are clearly identified. All types appear to be in general much more symmetric than those recorded with SABRE, and the skewness is only weakly dependent on the sign of the mean Doppler shift. Its distribution peaks near zero and shows a weak positive correlation with the type II Doppler shifts, while the mostly positive type I Doppler shifts are slightly negatively correlated with the skewness.

  7. Few Skewed Results from IOTA Interferometer YSO Disk Survey

    NASA Astrophysics Data System (ADS)

    Monnier, J. D.; Millan-Gabet, R.; Berger, J.-P.; Pedretti, E.; Traub, W.; Schloerb, F. P.

    2005-12-01

    The 3-telescope IOTA interferometer is capable of measuring closure phases for dozens of Herbig Ae/Be stars in the near-infrared. The closure phase unambiguously identifies deviations from centro-symmetry (i.e., skew) in the brightness distribution, at the scale of 4 milliarcseconds (sub-AU physical scales) for our work. Indeed, hot dust emission from the inner circumstellar accretion disk is expected to be skewed for (generic) flared disks viewed at intermediate inclination angles, as has been observed for LkHa 101. Surprisingly, we find very little evidence for skewed disk emission in our IOTA3 sample, setting strong constraints on the geometry of the inner disk. In particular, we rule out the currently-popular model of a VERTICAL hot inner wall of dust at the sublimation radius. Instead, our data are more consistent with a curved inner wall that bends away from the midplane, as might be expected from the pressure-dependence of dust sublimation or limited absorption of stellar luminosity in the disk midplane by gas.

  8. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for model errors. To deal with missingness, we employ an informative missing data model. The joint models couple the partially linear mixed-effects model for the longitudinal process with the cause-specific proportional hazards model for the competing risks process and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  9. Fundamental limits to frequency estimation: a comprehensive microscopic perspective

    NASA Astrophysics Data System (ADS)

    Haase, J. F.; Smirne, A.; Kołodyński, J.; Demkowicz-Dobrzański, R.; Huelga, S. F.

    2018-05-01

    We consider a metrology scenario in which qubit-like probes are used to sense an external field that affects their energy splitting in a linear fashion. Following the frequency estimation approach in which one optimizes the state and sensing time of the probes to maximize the sensitivity, we provide a systematic study of the attainable precision under the impact of noise originating from independent bosonic baths. Specifically, we invoke an explicit microscopic derivation of the probe dynamics using the spin-boson model with weak coupling of arbitrary geometry. We clarify how the secular approximation leads to a phase-covariant (PC) dynamics, where the noise terms commute with the field Hamiltonian, while the inclusion of non-secular contributions breaks the PC. Moreover, unless one restricts to a particular (i.e., Ohmic) spectral density of the bath modes, the noise terms may contain relevant information about the frequency to be estimated. Thus, by considering general evolutions of a single probe, we study regimes in which these two effects have a non-negligible impact on the achievable precision. We then consider baths of Ohmic spectral density yet fully accounting for the lack of PC, in order to characterize the ultimate attainable scaling of precision when N probes are used in parallel. Crucially, we show that beyond the semigroup (Lindbladian) regime the Zeno limit imposing the 1/N^{3/2} scaling of the mean squared error, recently derived assuming PC, generalises to any dynamics of the probes, unless the latter are coupled to the baths in the direction perfectly transversal to the frequency encoding, when a novel scaling of 1/N^{7/4} arises. As our microscopic approach covers all classes of dissipative dynamics, from semigroup to non-Markovian ones (each of them potentially non-phase-covariant), it provides an exhaustive picture, in which all the different asymptotic scalings of precision naturally emerge.

  10. Near-field shock formation in noise propagation from a high-power jet aircraft.

    PubMed

    Gee, Kent L; Neilsen, Tracianne B; Downing, J Micah; James, Michael M; McKinley, Richard L; McKinley, Robert C; Wall, Alan T

    2013-02-01

    Noise measurements near the F-35A Joint Strike Fighter at military power are analyzed via spatial maps of overall and band pressure levels and skewness. Relative constancy of the pressure waveform skewness reveals that waveform asymmetry, characteristic of supersonic jets, is a source phenomenon originating farther upstream than the maximum overall level. Conversely, growth of the skewness of the time derivative with distance indicates that acoustic shocks largely form through the course of near-field propagation and are not generated explicitly by a source mechanism. These results potentially counter previous arguments that jet "crackle" is a source phenomenon.
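
    The distinction drawn above, symmetric pressure amplitudes but a strongly skewed time derivative, can be illustrated with an idealised sawtooth standing in for a fully shocked waveform (the waveform and sample count below are invented, not F-35A data):

```python
import numpy as np

def skewness(x):
    x = x - x.mean()
    return np.mean(x**3) / np.mean(x**2)**1.5

# Idealised fully shocked waveform: slow expansions with abrupt
# compressions (a rising sawtooth).
t = np.linspace(0.0, 10.0, 10_000, endpoint=False)
p = 0.5 - (t % 1.0)              # zero-mean sawtooth "pressure"
dpdt = np.diff(p)                # time derivative (finite difference)

skew_p = skewness(p)             # amplitude distribution: symmetric
skew_dp = skewness(dpdt)         # derivative: strongly positively skewed
```

    The amplitude distribution of the sawtooth is uniform (skewness near zero), while its derivative is dominated by rare, large positive jumps at the shocks, which is the signature the derivative-skewness metric tracks as shocks form during propagation.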

  11. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
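
    For readers unfamiliar with the basic operation these networks stack, a convolutional layer slides a small kernel over the image and takes local weighted sums. A minimal plain-NumPy sketch of that operation (an illustration, unrelated to the MicroBooNE code itself):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, the core operation of a
    convolutional layer (no padding, stride 1)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
edge = np.array([[1., 0., -1.]] * 3)             # vertical-edge filter
```

    The identity kernel reproduces the interior of the image, while the edge filter responds uniformly to the toy image's constant left-to-right gradient; a trained network learns many such kernels per layer.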

  12. Rock images classification by using deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin-section images, which selects and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces, respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and results in the HSV and YCbCr colour spaces are also reliable. The results show that the convolutional neural network can classify the rock images with high reliability.

  13. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.

  14. Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone

    NASA Astrophysics Data System (ADS)

    Visher, Glenn S.; Cunningham, Russ D.

    1981-03-01

    Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh-Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.

  15. Detecting of foreign object debris on airfield pavement using convolution neural network

    NASA Astrophysics Data System (ADS)

    Cao, Xiaoguang; Gu, Yufeng; Bai, Xiangzhi

    2017-11-01

    It is of great practical significance to detect foreign object debris (FOD) timely and accurately on the airfield pavement, because FOD is a fatal threat to runway safety in airports. In this paper, a new FOD detection framework based on the Single Shot MultiBox Detector (SSD) is proposed. Two strategies are proposed to better solve the FOD detection problem: making the detection network lighter and using dilated convolution. The advantages mainly include: (i) the network structure becomes lighter, which speeds up the detection task and enhances detection accuracy; (ii) dilated convolution is applied in the network structure to handle smaller FOD. Thus, we get a faster and more accurate detection system.
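
    The second strategy is worth unpacking: a dilated convolution samples its input with gaps, so a small kernel covers a wider receptive field without extra parameters, which helps with small targets like FOD. A 1-D NumPy sketch of the idea (illustrative, not the paper's network):

```python
import numpy as np

def dilate_kernel_1d(k, rate):
    """Insert (rate - 1) zeros between kernel taps: a rate-d dilated
    convolution equals ordinary convolution with this expanded kernel."""
    out = np.zeros((len(k) - 1) * rate + 1)
    out[::rate] = k
    return out

def dilated_conv1d(x, k, rate):
    """'Valid' 1-D dilated cross-correlation with dilation `rate`."""
    span = (len(k) - 1) * rate + 1          # receptive field of the kernel
    return np.array([np.sum(x[i:i+span:rate] * k)
                     for i in range(len(x) - span + 1)])

x = np.arange(10, dtype=float)
k = np.array([1., 2., 1.])
y = dilated_conv1d(x, k, rate=2)            # 3 taps, receptive field of 5
```

    A 3-tap kernel at dilation rate 2 sees a span of 5 input samples; stacking dilated layers grows the receptive field exponentially while the parameter count stays fixed.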

  16. Kinship and Incest Avoidance Drive Patterns of Reproductive Skew in Cooperatively Breeding Birds.

    PubMed

    Riehl, Christina

    2017-12-01

    Social animals vary in how reproduction is divided among group members, ranging from monopolization by a dominant pair (high skew) to equal sharing by cobreeders (low skew). Despite many theoretical models, the ecological and life-history factors that generate this variation are still debated. Here I analyze data from 83 species of cooperatively breeding birds, finding that kinship within the breeding group is a powerful predictor of reproductive sharing across species. Societies composed of nuclear families have significantly higher skew than those that contain unrelated members, a pattern that holds for both multimale and multifemale groups. Within-species studies confirm this, showing that unrelated subordinates of both sexes are more likely to breed than related subordinates are. Crucially, subordinates in cooperative groups are more likely to breed if they are unrelated to the opposite-sex dominant, whereas relatedness to the same-sex dominant has no effect. This suggests that incest avoidance, rather than suppression by dominant breeders, may be an important proximate mechanism limiting reproduction by subordinates. Overall, these results support the ultimate evolutionary logic behind concessions models of skew, namely, that related subordinates gain indirect fitness benefits from helping at the nests of kin, so a lower direct reproductive share is required for selection to favor helping over dispersal, but not the proximate mechanism of dominant control assumed by these models.

  17. How do reproductive skew and founder group size affect genetic diversity in reintroduced populations?

    PubMed

    Miller, K A; Nelson, N J; Smith, H G; Moore, J A

    2009-09-01

    Reduced genetic diversity can result in short-term decreases in fitness and reduced adaptive potential, which may lead to an increased extinction risk. Therefore, maintaining genetic variation is important for the short- and long-term success of reintroduced populations. Here, we evaluate how founder group size and variance in male reproductive success influence the long-term maintenance of genetic diversity after reintroduction. We used microsatellite data to quantify the loss of heterozygosity and allelic diversity in the founder groups from three reintroductions of tuatara (Sphenodon), the sole living representatives of the reptilian order Rhynchocephalia. We then estimated the maintenance of genetic diversity over 400 years (approximately 10 generations) using population viability analyses. Reproduction of tuatara is highly skewed, with as few as 30% of males mating across years. Predicted losses of heterozygosity over 10 generations were low (1-14%), and populations founded with more animals retained a greater proportion of the heterozygosity and allelic diversity of their source populations and founder groups. Greater male reproductive skew led to greater predicted losses of genetic diversity over 10 generations, but only accelerated the loss of genetic diversity at small population size (<250 animals). A reduction in reproductive skew at low density may facilitate the maintenance of genetic diversity in small reintroduced populations. If reproductive skew is high and density-independent, larger founder groups could be released to achieve genetic goals for management.

  18. Impaired imprinted X chromosome inactivation is responsible for the skewed sex ratio following in vitro fertilization

    PubMed Central

    Tan, Kun; An, Lei; Miao, Kai; Ren, Likun; Hou, Zhuocheng; Tao, Li; Zhang, Zhenni; Wang, Xiaodong; Xia, Wei; Liu, Jinghao; Wang, Zhuqing; Xi, Guangyin; Gao, Shuai; Sui, Linlin; Zhu, De-Sheng; Wang, Shumin; Wu, Zhonghong; Bach, Ingolf; Chen, Dong-bao; Tian, Jianhui

    2016-01-01

    Dynamic epigenetic reprogramming occurs during normal embryonic development at the preimplantation stage. Erroneous epigenetic modifications due to environmental perturbations such as manipulation and culture of embryos during in vitro fertilization (IVF) are linked to various short- or long-term consequences. Among these, the skewed sex ratio, an indicator of reproductive hazards, was reported in bovine and porcine embryos and even human IVF newborns. However, since the first case of sex skewing reported in 1991, the underlying mechanisms remain unclear. We reported herein that sex ratio is skewed in mouse IVF offspring, and this was a result of female-biased peri-implantation developmental defects that were originated from impaired imprinted X chromosome inactivation (iXCI) through reduced ring finger protein 12 (Rnf12)/X-inactive specific transcript (Xist) expression. Compensation of impaired iXCI by overexpression of Rnf12 to up-regulate Xist significantly rescued female-biased developmental defects and corrected sex ratio in IVF offspring. Moreover, supplementation of an epigenetic modulator retinoic acid in embryo culture medium up-regulated Rnf12/Xist expression, improved iXCI, and successfully redeemed the skewed sex ratio to nearly 50% in mouse IVF offspring. Thus, our data show that iXCI is one of the major epigenetic barriers for the developmental competence of female embryos during preimplantation stage, and targeting erroneous epigenetic modifications may provide a potential approach for preventing IVF-associated complications. PMID:26951653

  19. T helper cell 2 immune skewing in pregnancy/early life: chemical exposure and the development of atopic disease and allergy.

    PubMed

    McFadden, J P; Thyssen, J P; Basketter, D A; Puangpet, P; Kimber, I

    2015-03-01

    During the last 50 years there has been a significant increase in Western societies of atopic disease and associated allergy. The balance between functional subpopulations of T helper cells (Th) determines the quality of the immune response provoked by antigen. One such subpopulation - Th2 cells - is associated with the production of IgE antibody and atopic allergy, whereas, Th1 cells antagonize IgE responses and the development of allergic disease. In seeking to provide a mechanistic basis for this increased prevalence of allergic disease, one proposal has been the 'hygiene hypothesis', which argues that in Westernized societies reduced exposure during early childhood to pathogenic microorganisms favours the development of atopic allergy. Pregnancy is normally associated with Th2 skewing, which persists for some months in the neonate before Th1/Th2 realignment occurs. In this review, we consider the immunophysiology of Th2 immune skewing during pregnancy. In particular, we explore the possibility that altered and increased patterns of exposure to certain chemicals have served to accentuate this normal Th2 skewing and therefore further promote the persistence of a Th2 bias in neonates. Furthermore, we propose that the more marked Th2 skewing observed in first pregnancy may, at least in part, explain the higher prevalence of atopic disease and allergy in the first born. © 2014 British Association of Dermatologists.

  20. Coding performance of the Probe-Orbiter-Earth communication link

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Dolinar, S.; Pollara, F.

    1993-01-01

    The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
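
    For concreteness, a rate-1/2 convolutional encoder of the kind assumed at the Probe can be sketched in a few lines. The constraint-length-3, (7, 5)-octal generators below are the standard textbook choice, not necessarily the code analyzed in the paper.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3, with the
    classic (7, 5) octal generators; two output bits per input bit."""
    state = 0                        # the two most recent input bits
    out = []
    for b in bits:
        reg = (b << 2) | state       # register: [current, prev, prev-prev]
        out.append(bin(reg & g1).count("1") % 2)   # parity under generator 1
        out.append(bin(reg & g2).count("1") % 2)   # parity under generator 2
        state = (reg >> 1) & 0b11    # shift the register
    return out
```

    Each input bit yields two channel bits; the Viterbi decoders on the ground invert this mapping by maximum-likelihood search over the encoder trellis, and in the cascaded Probe-Orbiter configuration this inner encode/decode step is applied twice.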

  1. Identification and Classification of Orthogonal Frequency Division Multiple Access (OFDMA) Signals Used in Next Generation Wireless Systems

    DTIC Science & Technology

    2012-03-01

    advanced antenna systems; AMC: adaptive modulation and coding; AWGN: additive white Gaussian noise; BPSK: binary phase shift keying; BS: base station; BTC: ... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating

  2. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  3. Lunar Circular Structure Classification from Chang 'e 2 High Resolution Lunar Images with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zeng, X. G.; Liu, J. J.; Zuo, W.; Chen, W. L.; Liu, Y. X.

    2018-04-01

    Circular structures are widely distributed across the lunar surface. The most typical of them are lunar impact craters, lunar domes, etc. In this approach, we try to use a convolutional neural network to classify lunar circular structures from lunar images.

  4. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as the convolutional neural network (CNN) and deep belief network (DBN). However, they suffer from some problems such as local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and the fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high-level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of features, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128

  5. A pre-trained convolutional neural network based method for thyroid nodule diagnosis.

    PubMed

    Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing

    2017-01-01

    In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and also have vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks pre-trained with the ImageNet database are separately trained. Secondly, we fuse the feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experiment results show that the proposed CNN based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN based models leads to a significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Enhancement of digital radiography image quality using a convolutional neural network.

    PubMed

    Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing

    2017-01-01

    Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in spatial resolution and signal to noise ratio. In this study, we explored whether the image quality acquired by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. The experiment, evaluated on a test dataset containing 5 X-ray images, showed that the proposed method outperformed the traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal to noise ratio (PSNR), while keeping processing time highly efficient, within one second. Experimental results demonstrated that a residual-to-residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing the image resolution and reducing image noise. Thus, this study indicated that applying this RTR convolutional neural network system is useful for improving image quality acquired by digital radiography systems.
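
    The PSNR figure quoted above is computed from the mean squared error between a reference image and a test image; a small sketch of the metric (the 8x8 test arrays are invented):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB for images on a [0, max_val] scale."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images: infinite PSNR
    return 10.0 * np.log10(max_val**2 / mse)

a = np.full((8, 8), 100.0)           # toy reference image
b = a + 10.0                         # uniform error of 10 grey levels, MSE = 100
```

    Here psnr(a, b) is about 28.13 dB; on this logarithmic scale, the ~1.3 dB gain reported above corresponds to roughly a 26% reduction in mean squared error.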

  7. Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.

    PubMed

    Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong

    2017-11-17

    Alcohol use disorder (AUD) is an important brain disease that alters the brain structure. Recently, scholars have tended to use computer vision based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 images were used as the training set, and a data augmentation method was used. The remaining 135 images were used as the test set. Further, we chose the latest powerful technique, a convolutional neural network (CNN) based on a convolutional layer, rectified linear unit layer, pooling layer, fully connected layer, and softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%. Our method was better than three state-of-the-art approaches. Besides, stochastic pooling performed better than max pooling and average pooling. We validated that a CNN with five convolution layers and two fully connected layers performed the best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
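
    The three pooling variants compared above differ only in how each 2×2 window is summarised: its maximum, its mean, or a randomly drawn activation with probability proportional to its (nonnegative) value. A NumPy sketch with an invented activation map:

```python
import numpy as np

rng = np.random.default_rng(0)

def pool2x2(x, mode, rng=rng):
    """2x2, stride-2 pooling over a nonnegative activation map:
    'max', 'avg', or 'stochastic' (value-proportional sampling)."""
    H, W = x.shape
    out = np.zeros((H // 2, W // 2))
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            win = x[i:i+2, j:j+2].ravel()
            if mode == "max":
                v = win.max()
            elif mode == "avg":
                v = win.mean()
            else:                     # stochastic pooling
                p = win / win.sum() if win.sum() > 0 else np.full(4, 0.25)
                v = rng.choice(win, p=p)
            out[i // 2, j // 2] = v
    return out

act = np.array([[1., 2., 0., 0.],
                [3., 4., 0., 8.],
                [1., 1., 2., 2.],
                [1., 1., 2., 2.]])
```

    Stochastic pooling keeps the strong responses that max pooling favours while retaining, at training time, some of the regularising randomness that helps on small data sets like the one above.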

  8. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.

    PubMed

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, namely, the deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of ELMs. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce feature dimensionality, saving much training time and computation. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets, MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods.
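
    The "fast training of ELM" that DC-ELM inherits comes from the hidden weights being random and fixed, so only the output weights need solving, and that is a closed-form least-squares problem rather than iterative back-propagation. A generic ELM sketch (not the DC-ELM architecture itself; names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, T, n_hidden=64):
    """Extreme learning machine: random, fixed hidden weights; only the
    output weights are solved in closed form (least squares)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T     # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

    Because training is a single pseudoinverse solve, there is no gradient descent at all, which is the source of the speed advantage the abstract reports.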

  9. Lp-stability (1 less than or equal to p less than or equal to infinity) of multivariable nonlinear time-varying feedback systems that are open-loop unstable. [noting unstable convolution subsystem forward control and time varying nonlinear feedback]

    NASA Technical Reports Server (NTRS)

    Callier, F. M.; Desoer, C. A.

    1973-01-01

    A class of multivariable, nonlinear, time-varying feedback systems with an unstable convolution subsystem as feedforward and a time-varying nonlinear gain as feedback was considered. The impulse response of the convolution subsystem is the sum of a finite number of increasing exponentials multiplied by nonnegative powers of the time t, a term that is absolutely integrable, and an infinite series of delayed impulses. The main result is a theorem. It essentially states that if the unstable convolution subsystem can be stabilized by a constant feedback gain F, and if the incremental gain of the difference between the nonlinear gain function and F is sufficiently small, then the nonlinear system is L(p)-stable for any p between one and infinity. Furthermore, the solutions of the nonlinear system depend continuously on the inputs in any L(p)-norm. A fixed-point theorem is crucial in deriving the above theorem.

  10. Refinement of Scoring Procedures for the Basic Attributes Test (BAT) Battery

    DTIC Science & Technology

    1993-03-01

    see Carretta, 1991). Research on the BAT summary scores has shown that some of them (a) are significantly positively skewed and platykurtic, (b) contain...for positively skewed and platykurtic data distributions, and those that were applied here to the BAT data, are the square-root and natural logarithm

  11. Electronic skewing circuit monitors exact position of object underwater

    NASA Technical Reports Server (NTRS)

    Roller, R.; Yaroshuk, N.

    1967-01-01

    Linear Variable Differential Transformer /LVDT/ electronic skewing circuit guides a long cylindrical capsule underwater into a larger tube so that it does not contact the tube wall. This device detects movement of the capsule from a reference point and provides a continuous signal that is monitored on an oscilloscope.

  12. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
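
    The underlying loop, an evolutionary search that removes image content so long as the frozen network's output is preserved, can be sketched as a simple (1+1)-style algorithm. Everything below is a hypothetical reconstruction: the `predict` stand-in for the trained CNN, the zero "simplified" value, and the acceptance tolerance are my assumptions, not the authors' exact operators.

```python
import numpy as np

rng = np.random.default_rng(1)

def simplify(image, predict, generations=200, tol=1e-3):
    """(1+1)-style evolutionary simplification: repeatedly try zeroing a
    random pixel; keep the mutation only if the (frozen) network's output
    is essentially unchanged. `predict` stands in for the trained CNN."""
    baseline = predict(image)
    current = image.copy()
    for _ in range(generations):
        candidate = current.copy()
        i = rng.integers(candidate.size)
        candidate.flat[i] = 0.0                     # mutate: simplify one pixel
        if abs(predict(candidate) - baseline) < tol:
            current = candidate                     # accept: meaning preserved
    return current
```

    Pixels that survive this pressure are, by construction, the ones the network's decision depends on, which is how the simplified images localize the relevant regions.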

  13. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlating seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence toward the surface wave Green's function is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources offers more possibilities for interferometry: controlled sources make it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), an arrangement known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher modes from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
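
    The correlational branch reduces to a familiar operation: cross-correlate the traces recorded at two receivers for each source and stack over sources, so that the stack converges toward the inter-receiver Green's function. A minimal sketch (the array layout and function name are illustrative, not the authors' processing code):

```python
import numpy as np

def interferometric_correlation(records_a, records_b):
    """Cross-correlate the wavefields recorded at two receivers for every
    shared source and stack over sources; the stack converges toward the
    inter-receiver Green's function. records_* : (n_sources, n_samples)."""
    n = records_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(records_a, records_b):
        stack += np.correlate(b, a, mode="full")
    return stack
```

    For a single impulsive arrival the stacked correlogram peaks at the lag equal to the inter-receiver travel-time difference, which is the kinematic information the sensitivity kernels are built from.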

  14. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.

    PubMed

    Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D

    2008-05-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) were used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared with the planned dose evaluated using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to the interplay effect was negligible.
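
    At its core, both the average-based and segment-based methods perform the same operation, convolving a static dose distribution with a motion PDF; they differ only in which PDF is used. A 1-D illustration (the segment-based method would call this once per segment, with that segment's own PDF):

```python
import numpy as np

def motion_blurred_dose(static_dose, motion_pdf):
    """Convolve a 1-D static dose profile with the probability density of
    intrafraction motion; the result is the expected delivered dose."""
    pdf = np.asarray(motion_pdf, dtype=float)
    pdf = pdf / pdf.sum()                      # normalise to unit probability
    return np.convolve(static_dose, pdf, mode="same")
```

    A delta-function PDF (no motion) leaves the dose unchanged, while a broad PDF smears the penumbra, which is how motion degrades dose conformity.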

  15. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell Gamma Plan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e. air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7–12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study shows that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement holds, and in what clinical and/or research cases it would be relevant.

  16. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite-dimensional state equation has been approximated by a finite-dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions of the approximating optimal control problems in some sense approximate solutions of the original control problem. Two schemes, one based upon piecewise constant approximation and the other involving spline functions, are discussed. Numerical results are presented, analyzed, and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589.
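
    The piecewise-constant idea can be illustrated on a scalar hereditary equation x'(t) = a x(t) + b x(t - r): the infinite-dimensional state (the solution segment on [t - r, t]) is replaced by finitely many stored past values. This is a generic Euler sketch of that reduction, not Rosen's exact scheme or its spline variant:

```python
import numpy as np

def delay_euler(a, b, r, history, T, dt):
    """Difference-equation approximation of x'(t) = a x(t) + b x(t - r).
    The solution segment over [t - r, t] is discretised into k = r/dt
    stored values, turning the hereditary problem into a finite-dimensional
    recursion."""
    k = int(round(r / dt))                           # delay in steps
    n = int(round(T / dt))
    x = np.empty(n + 1)
    buf = [history(-r + i * dt) for i in range(k)]   # discretised history
    x[0] = history(0.0)
    buf.append(x[0])
    for i in range(n):
        x_delayed = buf[-(k + 1)]                    # value near t - r
        x[i + 1] = x[i] + dt * (a * x[i] + b * x_delayed)
        buf.append(x[i + 1])
    return x
```

    With b = 0 the recursion collapses to the ordinary Euler method, which is a useful sanity check on the discretisation.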

  17. Quantum speed limits in open system dynamics.

    PubMed

    del Campo, A; Egusquiza, I L; Plenio, M B; Huelga, S F

    2013-02-01

    Bounds to the speed of evolution of a quantum system are of fundamental interest in quantum metrology, quantum chemical dynamics, and quantum computation. We derive a time-energy uncertainty relation for open quantum systems undergoing a general, completely positive, and trace preserving evolution which provides a bound to the quantum speed limit. When the evolution is of the Lindblad form, the bound is analogous to the Mandelstam-Tamm relation which applies in the unitary case, with the role of the Hamiltonian being played by the adjoint of the generator of the dynamical semigroup. The utility of the new bound is exemplified in different scenarios, ranging from the estimation of the passage time to the determination of precision limits for quantum metrology in the presence of dephasing noise.
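
    For reference, the unitary-case bound that the open-system result generalizes can be written out. The following is the standard Mandelstam-Tamm relation (standard background, not quoted from the abstract); per the abstract, the open-system analogue is obtained by letting the adjoint of the semigroup generator play the role of the Hamiltonian.

```latex
% Mandelstam-Tamm time-energy bound: minimal time \tau_\perp to reach an
% orthogonal state under unitary dynamics generated by \hat H.
\tau_\perp \;\ge\; \frac{\pi\hbar}{2\,\Delta E},
\qquad
\Delta E = \sqrt{\langle \hat H^{2} \rangle - \langle \hat H \rangle^{2}}.
% Open-system analogue (Lindblad dynamics \dot\rho = \mathcal{L}\rho):
% the role of \hat H is played by the adjoint generator \mathcal{L}^{\dagger}.
```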

  18. The Fundamental Solution of the Linearized Navier Stokes Equations for Spinning Bodies in Three Spatial Dimensions Time Dependent Case

    NASA Astrophysics Data System (ADS)

    Thomann, Enrique A.; Guenther, Ronald B.

    2006-02-01

    Explicit formulae for the fundamental solution of the linearized time dependent Navier Stokes equations in three spatial dimensions are obtained. The linear equations considered in this paper include those used to model rigid bodies that are translating and rotating at a constant velocity. Estimates extending those obtained by Solonnikov in [23] for the fundamental solution of the time dependent Stokes equations, corresponding to zero translational and angular velocity, are established. Existence and uniqueness of solutions of these linearized problems is obtained for a class of functions that includes the classical Lebesgue spaces L p (R 3), 1 < p < ∞. Finally, the asymptotic behavior and semigroup properties of the fundamental solution are established.

  19. Gaussian geometric discord in terms of Hellinger distance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suciu, Serban, E-mail: serban.suciu@theory.nipne.ro; Isar, Aurelian

    2015-12-07

    In the framework of the theory of open systems based on completely positive quantum dynamical semigroups, we address the quantification of general non-classical correlations in Gaussian states of continuous variable systems from a geometric perspective. We give a description of the Gaussian geometric discord by using the Hellinger distance as a measure for quantum correlations between two non-interacting, non-resonant bosonic modes embedded in a thermal environment. We evaluate the Gaussian geometric discord by taking two-mode squeezed thermal states as initial states of the system and show that it has finite values between 0 and 1 and that it decays asymptotically to zero in time under the effect of the thermal bath.

  20. Micromagnetic recording model of writer geometry effects at skew

    NASA Astrophysics Data System (ADS)

    Plumer, M. L.; Bozeman, S.; van Ek, J.; Michel, R. P.

    2006-04-01

    The effects of the pole-tip geometry at the air-bearing surface on perpendicular recording at a skew angle are examined through modeling and spin-stand test data. Head fields generated by the finite element method were used to record transitions within our previously described micromagnetic recording model. Write-field contours for a variety of square, rectangular, and trapezoidal pole shapes were evaluated to determine the impact of geometry on field contours. Comparing results for recorded track width, transition width, and media signal to noise ratio at 0° and 15° skew demonstrate the benefits of trapezoidal and reduced aspect-ratio pole shapes. Consistency between these modeled results and test data is demonstrated.

  1. On the Yakhot-Orszag renormalization group method for deriving turbulence statistics and models

    NASA Technical Reports Server (NTRS)

    Smith, L. M.; Reynolds, W. C.

    1992-01-01

    An independent, comprehensive, critical review of the 'renormalization group' (RNG) theory of turbulence developed by Yakhot and Orszag (1986) is provided. Their basic theory for the Navier-Stokes equations is confirmed, and approximations in the scale removal procedure are discussed. The YO derivations of the velocity-derivative skewness and the transport equation for the energy dissipation rate are examined. An algebraic error in the derivation of the skewness is corrected. The corrected RNG skewness value of -0.59 is in agreement with experiments at moderate Reynolds numbers. Several problems are identified in the derivation of the energy dissipation rate equations which suggest that the derivation should be reformulated.
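
    The velocity-derivative skewness discussed above is a concrete statistic, S = <(du/dx)^3> / <(du/dx)^2>^(3/2), for which the corrected RNG calculation yields -0.59; it can be estimated directly from sampled velocity data. A minimal estimator sketch (the finite-difference choice is illustrative):

```python
import numpy as np

def derivative_skewness(u, dx=1.0):
    """Estimate the velocity-derivative skewness
    S = <(du/dx)^3> / <(du/dx)^2>^(3/2) from a sampled 1-D velocity signal,
    using central finite differences for the derivative."""
    dudx = np.gradient(u, dx)
    return np.mean(dudx ** 3) / np.mean(dudx ** 2) ** 1.5
```

    The statistic is scale-invariant (rescaling u leaves S unchanged), which is why it is a convenient dimensionless benchmark for comparing theory with experiments.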

  2. VLSI single-chip (255,223) Reed-Solomon encoder with interleaver

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)

    1990-01-01

    The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.

  3. Deep feature representation with stacked sparse auto-encoder and convolutional neural network for hyperspectral imaging-based detection of cucumber defects

    USDA-ARS?s Scientific Manuscript database

    It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...

  4. A Real-Time Convolution Algorithm and Architecture with Applications in SAR Processing

    DTIC Science & Technology

    1993-10-01

    multidimensional formulation of the DFT and convolution. IEEE-ASSP, ASSP-25(3):239-242, June 1977. [6] P. Hoogenboom et al. Definition study PHARUS: final...algorithms and the role of the tensor product. IEEE-ASSP, ASSP-40(12):2921-2930, December 1992. [8] P. Hoogenboom, P. Snoeij, P.J. Koomen, and H

  5. Two-level convolution formula for nuclear structure function

    NASA Astrophysics Data System (ADS)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  6. DSN telemetry system performance with convolutionally coded data

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.

    1975-01-01

    The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
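
    For readers unfamiliar with the terminology: "convolutionally coded" means each input bit is spread over several output bits by a shift register, and sequential or maximum-likelihood (Viterbi) decoding inverts this. The toy encoder below uses the textbook rate-1/2, constraint-length-3 code with generator polynomials 7 and 5 (octal); the DSN's operational codes used longer constraint lengths, so this is purely illustrative:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3, with the
    classic (7, 5) octal generator polynomials: each input bit produces
    two parity bits computed over a 3-bit shift register."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # 3-bit shift register
        out.append(bin(state & g1).count("1") % 2)   # parity for generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity for generator 2
    return out
```

    The redundancy (two output bits per input bit) is what the Viterbi decoder exploits to recover the data at low signal-to-noise ratios.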

  7. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic that can appropriately address the data. Two-dimensional measurements reveal enhanced unknown-mixture analysis capability as a result of their greater spectral information content over two one-dimensional methods taken separately. Two-dimensional convolute integers are an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass, and band-pass digital filters are truly two-dimensional and can be applied in a manner identical to their one-dimensional counterparts, that is, as a weighted nearest-neighbor, zero-phase-shift moving average with convolute-integer (universal number) weighting coefficients.
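
    The "weighted nearest-neighbor, moving average with zero phase shift" can be made concrete: slide an integer coefficient table over the data and normalize by the weight sum. The sketch below uses a uniform table for illustration; the actual convolute-integer tables (like Savitzky-Golay's) are derived from local least-squares polynomial fits and are not reproduced here:

```python
import numpy as np

def smooth2d(data, weights):
    """Two-dimensional convolute-integer smoothing: a weighted
    nearest-neighbour moving average with zero phase shift, the 2-D
    analogue of Savitzky-Golay filtering. `weights` is the integer
    coefficient table; the output is normalised by the weight sum."""
    weights = np.asarray(weights, dtype=float)
    kh, kw = weights.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(data, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(data, dtype=float)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * weights)
    return out / weights.sum()
```

    Because the window is centred and symmetric, the filter introduces no phase shift: features stay where they are while noise is averaged down.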

  8. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics, and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  9. Airplane detection in remote sensing images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem that has attracted great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show greater advantages than traditional methods, and we explain why this is. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  10. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2017-03-14

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  11. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their application to the human activity recognition problem. There have been multiple previous works using CNN features for videos, including CNNs with 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  12. Gas Classification Using Deep Convolutional Neural Networks.

    PubMed

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers, followed by a pooling layer and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.

  13. Gas Classification Using Deep Convolutional Neural Networks

    PubMed Central

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers, followed by a pooling layer and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods. PMID:29316723

  14. Applications of deep convolutional neural networks to digitized natural history collections.

    PubMed

    Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J

    2017-01-01

    Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.

  15. A convolutional neural network neutrino event classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, A.; Radovic, A.; Rocco, D.

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics, and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  16. Clinicopathologic correlations in Alibert-type mycosis fungoides.

    PubMed

    Eng, A M; Blekys, I; Worobec, S M

    1981-06-01

    Five cases of mycosis fungoides of the Alibert type were studied by taking multiple biopsy specimens at different stages of the disease. Large, hyperchromatic, slightly irregular mononuclear cells were the most frequent cells. Ultrastructurally, these cells were only slightly convoluted, had prominent heterochromatin banding at the nuclear membrane, and unremarkable cytoplasmic organelles. Highly convoluted cells with cerebriform nuclei were few. Large, regular, vesicular histiocytes were prominent in the early stages; ultrastructurally, they showed evenly distributed euchromatin. Epidermotropism was equally as important as Pautrier's abscesses as a hallmark of the disease. Stereologic techniques comparing the infiltrate with regard to cell size and convolution in all stages of mycosis fungoides with infiltrates seen in a variety of benign dermatoses showed no statistically significant differences.

  17. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  18. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
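
    The note's Median t and Mad t intervals are robust adjustments to the classical Student's t interval; since their exact formulas are not quoted in this excerpt, the second function below is only an assumed illustration of the idea (median centre, MAD-based spread), and both use a large-sample normal critical value from the standard library rather than the exact t quantile:

```python
import numpy as np
from statistics import NormalDist

def mean_interval(x, conf=0.95):
    """Large-sample normal-theory interval for the mean: the classical
    interval that robust variants like Median t and Mad t adjust."""
    x = np.asarray(x, dtype=float)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * x.std(ddof=1) / np.sqrt(x.size)
    return x.mean() - half, x.mean() + half

def mad_interval(x, conf=0.95):
    """Illustrative robust variant (an assumed form, NOT the note's exact
    Mad t definition): median centre, MAD-based spread estimate."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # normal-consistent scaling
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * mad / np.sqrt(x.size)
    return med - half, med + half
```

    For heavily skewed data the median/MAD pair is far less sensitive to extreme observations than the mean/standard-deviation pair, which is the motivation behind such adjustments.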

  19. A Better Lemon Squeezer? Maximum-Likelihood Regression with Beta-Distributed Dependent Variables

    ERIC Educational Resources Information Center

    Smithson, Michael; Verkuilen, Jay

    2006-01-01

    Uncorrectable skew and heteroscedasticity are among the "lemons" of psychological data, yet many important variables naturally exhibit these properties. For scales with a lower and upper bound, a suitable candidate for models is the beta distribution, which is very flexible and models skew quite well. The authors present…

  20. Journey to Centers in the Core

    ERIC Educational Resources Information Center

    Groth, Randall E.; Kent, Kristen D.; Hitch, Ebony D.

    2015-01-01

    Considerable discrepancies between the mean and median often occur in data sets that are skewed left, skewed right, or have other unusual features. In such cases, it is important to analyze the data and context carefully to decide how best to describe centers of distributions. The importance of this type of statistical thinking is acknowledged in…

  1. Caste load and the evolution of reproductive skew.

    PubMed

    Holman, Luke

    2014-01-01

    Reproductive skew theory seeks to explain how reproduction is divided among group members in animal societies. Existing theory is framed almost entirely in terms of selection, though nonadaptive processes must also play some role in the evolution of reproductive skew. Here I propose that a genetic correlation between helper fecundity and breeder fecundity may frequently constrain the evolution of reproductive skew. This constraint is part of a wider phenomenon that I term "caste load," which is defined as the decline in mean fitness caused by caste-specific selection pressures, that is, differential selection on breeding and nonbreeding individuals. I elaborate the caste load hypothesis using quantitative and population genetic arguments and individual-based simulations. Although selection can sometimes erode genetic correlations and resolve caste load, this may be constrained when mutations have similar pleiotropic effects on breeder and helper traits. I document evidence for caste load, identify putative genomic adaptations to it, and suggest future research directions. The models highlight the value of considering adaptation within the boundaries imposed by genetic architecture and incidentally reaffirm that monogamy promotes the evolutionary transition to eusociality.

  2. Jet crackle: skewness transport budget and a mechanistic source model

    NASA Astrophysics Data System (ADS)

    Buchta, David; Freund, Jonathan

    2016-11-01

    The sound from high-speed (supersonic) jets, such as on military aircraft, is distinctly different from that of lower-speed jets, such as on commercial airliners. Atop the already loud noise, a higher speed adds an intense, fricative, and intermittent character. The observed pressure wave patterns have strong peaks followed by relatively long shallows; notably, their pressure skewness is Sk >= 0.4. Direct numerical simulations of free-shear-flow turbulence show that these skewed pressure waves occur immediately adjacent to the turbulence source for M >= 2.5. Additionally, the near-field waves are seen to intersect and nonlinearly merge with other waves. Statistical analysis of terms in a pressure skewness transport equation shows that, starting just beyond δ99, the nonlinear wave mechanics that add to Sk are balanced by damping molecular effects, consistent with this aspect of the sound arising in the source region. A gas dynamics description is developed that neglects rotational turbulence dynamics and yet reproduces the key crackle features. At its core, this mechanism shows simply that nonlinear compressive effects lead directly to stronger compressions than expansions and thus Sk > 0.

  3. Comparative study of reproductive skew and pair-bond stability using genealogies from 80 small-scale human societies.

    PubMed

    Ellsworth, Ryan M; Shenk, Mary K; Bailey, Drew H; Walker, Robert S

    2016-05-01

    Genealogies contain information on the prevalence of different sibling types that result from past reproductive behavior. Full sibling sets stem from stable monogamy, paternal half siblings primarily indicate male reproductive skew, and maternal half siblings reflect unstable pair bonds. Full and half sibling types are calculated for a total of 61,181 siblings from published genealogies for 80 small-scale societies, including foragers, horticulturalists, agriculturalists, and pastoralists from around the world. Most siblings are full (61%), followed by paternal half siblings (27%) and maternal half siblings (13%). Paternal half sibling fractions are positively correlated with more polygynous marriages, higher at low latitudes, and slightly higher in nonforagers. Maternal half sibling fractions are slightly higher at low latitudes but do not vary with subsistence. Partible paternity societies in Amazonia have more paternal half siblings, indicating higher male reproductive skew. Sibling counts from genealogies provide a convenient method to simultaneously investigate the reproductive skew and pair-bond stability dimensions of human mating systems cross-culturally. Am. J. Hum. Biol. 28:335-342, 2016. © 2015 Wiley Periodicals, Inc.

  4. Screening Immunomodulators To Skew the Antigen-Specific Autoimmune Response.

    PubMed

    Northrup, Laura; Sullivan, Bradley P; Hartwell, Brittany L; Garza, Aaron; Berkland, Cory

    2017-01-03

    Current therapies to treat autoimmune diseases often result in side effects such as nonspecific immunosuppression. Therapies that can induce antigen-specific immune tolerance provide an opportunity to reverse autoimmunity and mitigate the risks associated with global immunosuppression. To this end, co-administration of immunomodulators with autoantigens has been investigated as a means of reprogramming autoimmunity. To date, identifying immunomodulators that may skew the antigen-specific immune response has been ad hoc at best. To address this need, we utilized splenocytes obtained from mice with experimental autoimmune encephalomyelitis (EAE) to determine whether certain immunomodulators may induce markers of immune tolerance following antigen rechallenge. Of the immunomodulatory compounds investigated, only dexamethasone modified the antigen-specific immune response by skewing the cytokine response and decreasing T-cell populations at a concentration corresponding to a relevant in vivo dose. Thus, antigen-educated EAE splenocytes provide an ex vivo screen for investigating compounds capable of skewing the antigen-specific immune response, and this approach could be extrapolated to antigen-educated cells from other diseases or human tissues.

  5. Skewed task conflicts in teams: What happens when a few members see more conflict than the rest?

    PubMed

    Sinha, Ruchi; Janardhanan, Niranjan S; Greer, Lindred L; Conlon, Donald E; Edwards, Jeffery R

    2016-07-01

    Task conflict has been the subject of a long-standing debate in the literature: when does task conflict help or hurt team performance? We propose that this debate can be resolved by taking a more precise view of how task conflicts are perceived in teams. Specifically, we propose that when a few team members perceive a high level of task disagreement while a majority of others perceive low levels of task disagreement (that is, when task conflict is positively skewed), task conflict is most likely to live up to its purported benefits for team performance. In our first study of student teams engaged in a business decision game, we find support for the positive relationship between skewed task conflict and team performance. In our second field study of teams in a financial corporation, we find that the relationship between positively skewed task conflict and supervisor ratings of team performance is mediated by reflective communication within the team. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Kurtosis, skewness, and non-Gaussian cosmological density perturbations

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1993-01-01

    Cosmological topological defects as well as some nonstandard inflation models can give rise to non-Gaussian density perturbations. Skewness and kurtosis are the third and fourth moments that measure the deviation of a distribution from a Gaussian. Measurement of these moments for the cosmological density field and for the microwave background temperature anisotropy can provide a test of the Gaussian nature of the primordial fluctuation spectrum. In the case of the density field, the importance of measuring the kurtosis is stressed since it will be preserved through the weakly nonlinear gravitational evolution epoch. Current constraints on skewness and kurtosis of primeval perturbations are obtained from the observed density contrast on small scales and from recent COBE observations of temperature anisotropies on large scales. It is also shown how, in principle, future microwave anisotropy experiments might be able to reveal the initial skewness and kurtosis. It is shown that present data argue that if the initial spectrum is adiabatic, then it is probably Gaussian, but non-Gaussian isocurvature fluctuations are still allowed, and these are what topological defects provide.
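
    As a concrete illustration of the moments discussed above: skewness and excess kurtosis are the third and fourth standardized central moments (with 3 subtracted from the fourth so a Gaussian scores zero on both). A minimal numerical check, not tied to any cosmological data:

```python
import numpy as np

def standardized_moment(x, k):
    """k-th standardized central moment of a sample."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** k)

rng = np.random.default_rng(1)
gauss = rng.normal(size=100_000)
skewed = rng.exponential(size=100_000)

skew_g = standardized_moment(gauss, 3)       # ~0 for a Gaussian
kurt_g = standardized_moment(gauss, 4) - 3   # excess kurtosis, ~0 for a Gaussian
skew_e = standardized_moment(skewed, 3)      # ~2 for an exponential
kurt_e = standardized_moment(skewed, 4) - 3  # ~6 for an exponential
```

    Nonzero values of either moment signal departure from Gaussianity, which is exactly the test the abstract proposes for the primordial fluctuation spectrum.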

  7. Aberration compensation in a Skew parametric-resonance ionization cooling channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sy, Amy V.; Derbenev, Yaroslav S.; Morozov, Vasiliy

    Skew Parametric-resonance Ionization Cooling (Skew PIC) represents a novel method for focusing highly divergent particle beams, as in the final 6D cooling stage of a high-luminosity muon collider. In the muon collider concept, the resultant equilibrium transverse emittances from cooling with Skew PIC are an order of magnitude smaller than in conventional ionization cooling. The concept makes use of coupling of the transverse dynamic behavior, and the linear dynamics are well-behaved, with good agreement between analytic solutions and simulation results. Compared to the uncoupled system, coupling of the transverse dynamic behavior is expected to reduce the number of multipoles required for aberration compensation while also avoiding unwanted resonances. Aberration compensation is more complicated in the coupled case, especially in the high-luminosity muon collider application, where equilibrium angular spreads in the cooling channel are on the order of 200 mrad. We present recent progress on aberration compensation for control of highly divergent muon beams in the coupled correlated optics channel, and a simple cooling model to test the transverse acceptance of the channel.

  8. Determining the role of skewed X-chromosome inactivation in developing muscle symptoms in carriers of Duchenne muscular dystrophy.

    PubMed

    Viggiano, Emanuela; Ergoli, Manuela; Picillo, Esther; Politano, Luisa

    2016-07-01

    Duchenne and Becker dystrophinopathies (DMD and BMD) are X-linked recessive disorders caused by mutations in the dystrophin gene that lead to absent or reduced expression of dystrophin in both skeletal and heart muscles. DMD/BMD female carriers are usually asymptomatic, although about 8% may exhibit muscle or cardiac symptoms. Several mechanisms leading to reduced dystrophin have been hypothesized to explain the clinical manifestations, and the role of skewed XCI in particular is questioned. In this review, the mechanism of XCI and its involvement in the phenotype of BMD/DMD carriers with either a normal karyotype or X;autosome translocations with breakpoints at Xp21 (the locus of the DMD gene) will be analyzed. We have previously observed that DMD carriers with moderate/severe muscle involvement exhibit a moderately or extremely skewed XCI, in particular if presenting with an early onset of symptoms, while DMD carriers with mild muscle involvement present a random XCI. Moreover, we found that among the 87.1% of carriers with X;autosome translocations involving the locus Xp21 who developed signs and symptoms of dystrophinopathy, such as proximal muscle weakness and difficulty running, jumping and climbing stairs, 95.2% had a skewed XCI pattern in lymphocytes. These data support the hypothesis that skewed XCI is involved in the onset of the phenotype in DMD carriers, the X chromosome carrying the normal DMD gene being preferentially inactivated and leading to moderate-severe muscle involvement.

  9. Asian Zika virus strains target CD14+ blood monocytes and induce M2-skewed immunosuppression during pregnancy

    PubMed Central

    Foo, Suan-Sin; Chen, Weiqiang; Chan, Yen; Bowman, James W.; Chang, Lin-Chun; Choi, Younho; Yoo, Ji Seung; Ge, Jianning; Cheng, Genhong; Bonnin, Alexandre; Nielsen-Saines, Karin; Brasil, Patrícia; Jung, Jae U.

    2017-01-01

    Blood CD14+ monocytes are the frontline immunomodulators categorized into classical, intermediate or non-classical subsets, subsequently differentiating into M1 pro- or M2 anti-inflammatory macrophages upon stimulation. While Zika virus (ZIKV) rapidly establishes viremia, the target cells and immune responses, particularly during pregnancy, remain elusive. Furthermore, it is unknown whether African- and Asian-lineage ZIKV have different phenotypic impacts on host immune responses. Using human blood infection, we identified CD14+ monocytes as the primary target for African- or Asian-lineage ZIKV infection. When immunoprofiles of human blood infected with ZIKV were compared, a classical/intermediate monocyte-mediated M1-skewed inflammation by African-lineage ZIKV infection was observed, in contrast to a non-classical monocyte-mediated M2-skewed immunosuppression by Asian-lineage ZIKV infection. Importantly, infection of pregnant women’s blood revealed enhanced susceptibility to ZIKV infection. Specifically, Asian-lineage ZIKV infection of pregnant women’s blood led to an exacerbated M2-skewed immunosuppression of non-classical monocytes in conjunction with global suppression of type I interferon-signaling pathway and an aberrant expression of host genes associated with pregnancy complications. 30 ZIKV+ sera from symptomatic pregnant patients also showed elevated levels of M2-skewed immunosuppressive cytokines and pregnancy complication-associated fibronectin-1. This study demonstrates the differential immunomodulatory responses of blood monocytes, particularly during pregnancy, upon infection with different lineages of ZIKV. PMID:28827581

  10. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost-effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators because the peak discharges are no longer solely represented by single values. Thus new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.
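
    The full B-WLS/B-GLS machinery is involved, but the weighted least squares step at its core has a simple closed form, beta = (X^T W X)^{-1} X^T W y, with weights inversely proportional to each site's sampling variance. The sketch below is a plain WLS stand-in with entirely synthetic numbers (the drainage areas, variances, and coefficients are hypothetical), not the actual Bayesian procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
area = rng.uniform(10, 1000, n)                  # hypothetical drainage areas
X = np.column_stack([np.ones(n), np.log(area)])  # intercept + log-area regressor

beta_true = np.array([-0.3, 0.08])               # synthetic "regional" model
var = rng.uniform(0.05, 0.5, n)                  # per-site sampling variance
y = X @ beta_true + rng.normal(0, np.sqrt(var))  # noisy at-site skew estimates

# Weighted least squares: weight each site by 1 / sampling variance, so
# precisely estimated sites pull harder on the regional fit.
W = np.diag(1.0 / var)
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

    Down-weighting short, noisy records is the intuition the B-WLS step formalizes; the Bayesian layers then quantify how precise the fitted regional model itself is.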

  11. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
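
    The key idea of the convolution formalism — modeling the tide as a weighted sum of past, present, and future values of the tide-generating potential — can be illustrated with a toy discrete example. The forcing, weights, and sample spacing below are hypothetical, not taken from Munk and Cartwright:

```python
import numpy as np

# Toy illustration: the response h(t) is modeled as a weighted sum of
# lagged values of a forcing series g(t):
#     h(t) = sum_k w[k] * g(t - k*dt)
t = np.arange(0, 100, 0.5)                  # hours, 0.5 h sampling
g = np.cos(2 * np.pi * t / 12.42)           # M2-like forcing, ~12.42 h period

w = np.array([0.1, 0.25, 0.5, 0.25, 0.1])   # hypothetical lag weights
h = np.convolve(g, w, mode="same")          # weighted sum at each time step

# A linear, time-invariant response rescales and phase-shifts each input
# frequency but introduces no new frequencies; compare amplitudes over an
# interior window away from edge effects.
amp_ratio = h[50:150].max() / g[50:150].max()
```

    Because one set of weights covers the whole tidal band, the complete spectrum is represented at once, which is exactly how the convolution model avoids the omission errors of a finite harmonic sum.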

  12. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  13. System Design for FEC in Aeronautical Telemetry

    DTIC Science & Technology

    2012-03-12

    rate punctured convolutional codes for soft-decision Viterbi...below follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code as follows. We begin with the buffer c1...concatenated convolutional code (SCCC). The contributions of this paper are on the system-design level. One major contribution is to design an SCCC code

  14. Convolutional coding results for the MVM '73 X-band telemetry experiment

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1978-01-01

    Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.

  15. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  16. A deep learning method for early screening of lung cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Kunpeng; Jiang, Huiqin; Ma, Ling; Gao, Jianbo; Yang, Xiaopeng

    2018-04-01

    Lung cancer is the leading cause of cancer-related deaths among men. In this paper, we propose a pulmonary nodule detection method for early screening of lung cancer based on an improved AlexNet model. First, in order to maintain the same image quality as the existing B/S-architecture PACS system, we convert the original CT images into JPEG format by parsing the DICOM files. Second, in view of the large size and complex background of chest CT images, we design the convolutional neural network on the basis of the AlexNet model and a sparse convolution structure. Finally, we train our models with DIGITS, a software package provided by NVIDIA. The main contribution of this paper is to apply a convolutional neural network to the early screening of lung cancer and to improve screening accuracy by combining the AlexNet model with the sparse convolution structure. We conduct a series of experiments on chest CT images using the proposed method, whose sensitivity and specificity indicate that it can effectively improve the accuracy of early screening for lung cancer and has clinical significance.

  17. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but learn their own features separately in the second layer; all three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.

  18. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    PubMed

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, grayscale image patches of fixed size are obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which sufficiently considers the multi-scale contextual information of deep-layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back-propagation (BP) algorithm, which contains a new up-sampling method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Chinese character recognition based on Gabor feature extraction and CNN

    NASA Astrophysics Data System (ADS)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition presents great difficulty. In order to solve this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a convolutional neural network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the grayscale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the feature maps from the Gabor filters and the original image are convolved with learned kernels, and the results of the convolution are the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
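
    A minimal sketch of the Gabor feature-extraction step: a real-valued Gabor kernel at eight orientations, applied to a toy image. The kernel parameters (size, sigma, wavelength, aspect ratio) are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, ksize=9, sigma=2.0, lam=4.0, gamma=0.5):
    """Real Gabor kernel at orientation theta (radians); parameters assumed."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

# Eight orientations, as described in the abstract.
bank = [gabor_kernel(k * np.pi / 8) for k in range(8)]

# A toy image containing a single vertical stroke.
img = np.zeros((32, 32))
img[:, 16] = 1.0

# Peak absolute response per orientation; the vertical stroke responds most
# strongly to the theta = 0 kernel, whose oscillation runs horizontally
# (perpendicular to the stroke).
responses = [np.abs(convolve2d(img, k, mode="same")).max() for k in bank]
strongest = int(np.argmax(responses))
```

    Character strokes behave like the toy stroke here: each orientation channel lights up for strokes roughly perpendicular to its oscillation, which is why the eight feature maps make useful inputs to the CNN.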

  20. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
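
    A compact hard-decision Viterbi decoder for a standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal) illustrates the add-compare-select recursion described above. This is a generic sketch, not the chapter's compare-select-add variant:

```python
# Generators (7, 5) in octal; 3-bit register holds the newest bit in the MSB.
G = [0b111, 0b101]

def branch_out(state, b):
    """Coded output bits and next state for input bit b from a 2-bit state."""
    reg = (b << 2) | state
    out = [bin(reg & g).count("1") % 2 for g in G]   # parity per generator
    return out, reg >> 1

def encode(bits):
    state, coded = 0, []
    for b in bits:
        out, state = branch_out(state, b)
        coded += out
    return coded

def viterbi(received):
    INF = float("inf")
    metrics = [0] + [INF] * 3                 # start in the all-zero state
    paths = [[] for _ in range(4)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_m, new_p = [INF] * 4, [[] for _ in range(4)]
        for s in range(4):                    # add-compare-select per state
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                out, ns = branch_out(s, b)
                m = metrics[s] + sum(x != y for x, y in zip(out, r))
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[min(range(4), key=lambda s: metrics[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0] + [0, 0]   # two tail zeros flush the encoder
coded = encode(msg)
coded[4] ^= 1                             # inject a single channel bit error
decoded = viterbi(coded)
```

    With free distance 5, this code corrects the injected single error, and `decoded` matches `msg` including the tail bits.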

  2. Defect detection and classification of galvanized stamping parts based on fully convolution neural network

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao

    2018-04-01

    In this paper, a new convolutional neural network method is proposed for the inspection and classification of galvanized stamping parts. First, all workpieces are divided into normal and defective classes by image processing, and the region of interest (ROI) extracted from each defective workpiece is input to a trained fully convolutional network (FCN). The network utilizes end-to-end, pixel-to-pixel training, currently the most advanced approach in semantic segmentation, and predicts a result for each pixel. Second, we mark distinct pixel values for the workpiece, defect, and background in the training images, and use the pixel values and pixel counts to recognize defects in the output image. Finally, a threshold on the defect area, chosen according to the needs of the project, is set to achieve the specific classification of the workpiece. The experimental results show that the proposed method can successfully achieve defect detection and classification of galvanized stamping parts under ordinary camera and illumination conditions, with an accuracy of 99.6%. Moreover, it avoids complex image preprocessing and difficult feature extraction and shows better adaptability.

  3. Traffic sign recognition based on deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for a TSR system. In this paper, we propose a new method for TSR based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed "block-layer" below), which combines network-in-network and residual connections, is designed. Our network has 10 layers with parameters (each block-layer counted as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully connected layers. We train our TSR network on the German Traffic Sign Recognition Benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". We employ scaled exponential linear units (SELUs) as the activation function, which can induce self-normalizing properties. To speed up training, we use an efficient GPU to accelerate the convolution operations. On the GTSRB test dataset, we achieve an accuracy of 99.67%, exceeding state-of-the-art results.
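
    The SELU activation mentioned above has a fixed published form (Klambauer et al., 2017): selu(x) = lambda*x for x > 0 and lambda*alpha*(exp(x) - 1) otherwise, with constants chosen so that zero-mean, unit-variance inputs keep roughly zero mean and unit variance after the activation. A small numerical check of that self-normalizing property:

```python
import numpy as np

# Constants from Klambauer et al. (2017), "Self-Normalizing Neural Networks".
SCALE, ALPHA = 1.0507009873554805, 1.6732632423543772

def selu(x):
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

# Feed standard-normal activations through SELU: the output should remain
# close to zero mean and unit variance, which is the fixed point that makes
# deep stacks of SELU layers self-normalizing.
rng = np.random.default_rng(0)
z = selu(rng.normal(size=1_000_000))
```

    This fixed-point behavior is what lets SELU networks train stably without explicit batch normalization.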

  4. Accelerated Time-Domain Modeling of Electromagnetic Pulse Excitation of Finite-Length Dissipative Conductors over a Ground Plane via Function Fitting and Recursive Convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh

    In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
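    The recursion-based acceleration can be illustrated for a single (made-up) exponential kernel term: once the impedance kernel is fitted as k(t) = a·exp(b·t), each term's running convolution needs only the previous output value rather than the full voltage history. The coefficients below are arbitrary, not fitted to any real Earth impedance:

```python
import math
import random

def brute_force_convolution(x, a, b, dt):
    """O(N^2) rectangle-rule convolution: y_n = dt * sum_m k((n-m)*dt) * x_m."""
    y = []
    for n in range(len(x)):
        y.append(dt * sum(a * math.exp(b * (n - m) * dt) * x[m]
                          for m in range(n + 1)))
    return y

def recursive_convolution(x, a, b, dt):
    """O(N) update exploiting the exponential kernel:
    y_n = exp(b*dt) * y_{n-1} + dt * a * x_n."""
    decay = math.exp(b * dt)
    y, state = [], 0.0
    for xn in x:
        state = decay * state + dt * a * xn
        y.append(state)
    return y

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(200)]
y1 = brute_force_convolution(x, a=2.0, b=-3.0, dt=0.01)
y2 = recursive_convolution(x, a=2.0, b=-3.0, dt=0.01)
print(max(abs(u - v) for u, v in zip(y1, y2)))  # the two agree to round-off
```

    A multi-term exponential fit simply runs one such recursion per basis term and sums the states, which is what makes late-time and long-line simulations tractable.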

  5. On exact solutions for disturbances to the asymptotic suction boundary layer: transformation of Barnes integrals to convolution integrals

    NASA Astrophysics Data System (ADS)

    Russell, John

    2000-11-01

    A modified Orr-Sommerfeld equation that applies to the asymptotic suction boundary layer was reported by Bussmann & Münz in a wartime report dated 1942 and by Hughes & Reid in J.F.M. (23, 1965, p. 715). Fundamental systems of exact solutions of the Orr-Sommerfeld equation for this mean velocity distribution were reported by D. Grohne in an unpublished typescript dated 1950. Exact solutions of the equation of Bussmann, Münz, Hughes, & Reid were reported by P. Baldwin in Mathematika (17, 1970, p. 206). Grohne and Baldwin noticed that these exact solutions may be expressed either as Barnes integrals or as convolution integrals. In a later paper (Phil. Trans. Roy. Soc. A, 399, 1985, p. 321), Baldwin applied the convolution integrals in the construction of large-Reynolds number asymptotic approximations that hold uniformly. The present talk discusses the subtleties that arise in the construction of such convolution integrals, including several not reported by Grohne or Baldwin. The aim is to recover the full set of seven solutions (one well balanced, three balanced, and three dominant-recessive) postulated by W.H. Reid in various works on the uniformly valid solutions.

  6. Brief Report: Non-Random X Chromosome Inactivation in Females with Autism

    ERIC Educational Resources Information Center

    Talebizadeh, Z.; Bittel, D. C.; Veatch, O. J.; Kibiryeva, N.; Butler, M. G.

    2005-01-01

    Autism is a heterogeneous neurodevelopmental disorder that occurs 3-4 times more often in males than in females. X chromosome genes may contribute to this skewed sex ratio through unusual skewing of X chromosome inactivation. We studied X chromosome skewness in 30 females with classical autism and 35 similarly aged unaffected female siblings as…

  7. Behavior analysis of CMOS D flip-flops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, H.J.; Johnston, C.A.

    1989-10-01

    In this paper, the authors analyze two D flip-flops (DFFs) generally considered to be the fastest (and most widely used), and compare their speed performance and their robustness against clock skew when a two-phase clocking scheme is applied. The effect of clock skew on their speed and proper logic operation is analyzed and verified with SPICE simulation.

  8. A Monte Carlo Study of Skewed Theta Distributions on DIF Indices.

    ERIC Educational Resources Information Center

    Monaco, Malina

    The effects of skewed theta distributions on indices of differential item functioning (DIF) were studied, comparing Mantel Haenszel (N. Mantel and W. Haenszel, 1959) and DFIT (N. S. Raju, W. J. van der Linden, and P. F. Fleer) (noncompensatory DIF). The significance of the study is that in educational and psychological data, the distributions one…

  9. Dip and anisotropy effects on flow using a vertically skewed model grid.

    PubMed

    Hoaglund, John R; Pollard, David

    2003-01-01

    Darcy flow equations relating vertical and bedding-parallel flow to vertical and bedding-parallel gradient components are derived for a skewed Cartesian grid in a vertical plane, correcting for structural dip given the principal hydraulic conductivities in bedding-parallel and bedding-orthogonal directions. Incorrect-minus-correct flow error results are presented for ranges of structural dip (0° ≤ θ ≤ 90°) and gradient directions (0° ≤ φ ≤ 360°). The equations can be coded into ground water models (e.g., MODFLOW) that can use a skewed Cartesian coordinate system to simulate flow in structural terrain with deformed bedding planes. Models modified with these equations will require input arrays of strike and dip, and a solver that can handle off-diagonal hydraulic conductivity terms.
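    The off-diagonal conductivity terms mentioned above come from rotating the principal conductivity tensor by the structural dip; a sketch of that tensor algebra in a 2-D vertical plane (the function name and conductivity values are illustrative, not the paper's):

```python
import numpy as np

def dipped_conductivity(k_parallel: float, k_normal: float,
                        dip_deg: float) -> np.ndarray:
    """Rotate a principal hydraulic-conductivity tensor (bedding-parallel,
    bedding-orthogonal) into x-z model coordinates for a given structural
    dip, K' = R K R^T. The off-diagonal entries are the cross terms a
    solver must handle."""
    t = np.radians(dip_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    principal = np.diag([k_parallel, k_normal])
    return rot @ principal @ rot.T

# At zero dip the tensor is diagonal; at 45 degrees the cross term reaches
# its maximum, (k_parallel - k_normal) / 2.
print(dipped_conductivity(10.0, 1.0, 0.0))
print(dipped_conductivity(10.0, 1.0, 45.0))
```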

  10. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    An option is a derivative instrument that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine the price of an option does not involve a skewness factor, and it is difficult to apply in computation because it produces oscillation for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a shifted Poisson model, and transform it into the form of a linear approximation in the complete market to reduce the oscillation. The result is a linear approximation formula that predicts the price of an option very accurately and successfully reduces the oscillations in the calculation process.
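    For context, the standard skewness-free Black-Scholes call price that the paper modifies can be sketched as follows; this is the textbook formula, not the paper's shifted-Poisson construction:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, t):
    """European call under Black-Scholes: C = S N(d1) - K e^{-rT} N(d2),
    with d1 = (ln(S/K) + (r + sigma^2/2) T) / (sigma sqrt(T))."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Textbook check: S=100, K=100, r=5%, sigma=20%, T=1 year gives ~10.45.
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 4))
```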

  11. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    NASA Astrophysics Data System (ADS)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. An ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and physiologic cup, but the lack of agreement among experts is still the main diagnosis problem. The application of deep convolutional neural networks combined with automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, and the major and minor radii of the optic disc and cup, in addition to all the ratios among the previous parameters, may help with a better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support the glaucoma diagnosis in eye fundus images.

  12. Deep learning based state recognition of substation switches

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2018-06-01

    Different from traditional methods, which recognize the state of substation switches based on the running rules of the electrical power system, this work proposes a novel convolutional neural network-based state recognition approach for substation switches. Inspired by the theory of transfer learning, we first establish a convolutional neural network model trained on the large-scale image set ILSVRC2012; then a restricted Boltzmann machine is employed to replace the fully connected layer of the convolutional neural network and is trained on our small image dataset of 110 kV substation switches to obtain a stronger model. Experiments conducted on our image dataset of 110 kV substation switches show that the proposed approach can be applied in substations to reduce running costs and implement truly unattended operation.

  13. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhu, Aichun; Wang, Tian; Snoussi, Hichem

    2018-03-01

    This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  14. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  15. Tera-Ops Processing for ATR

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, Suraphol; Padgett, Curtis; Zhu, David; Lung, Gerald; Howard, Ayanna

    2000-01-01

    A three-dimensional microelectronic device (3DANN-R) capable of performing general image convolution at a speed of 10^12 operations/second (ops) in a volume of less than 1.5 cubic centimeters has been successfully built under the BMDO/JPL VIGILANTE program. 3DANN-R was developed in partnership with Irvine Sensors Corp., Costa Mesa, California. 3DANN-R is a sugar-cube-sized, low-power image convolution engine whose core computation circuitry is capable of performing 64 image convolutions with large (64x64) windows at video frame rates. This paper explores potential applications of 3DANN-R such as target recognition, SAR and hyperspectral data processing, and general machine vision using real data, and discusses technical challenges for providing deployable systems for BMDO surveillance and interceptor programs.

  16. Hamiltonian Cycle Enumeration via Fermion-Zeon Convolution

    NASA Astrophysics Data System (ADS)

    Staples, G. Stacey

    2017-12-01

    Beginning with a simple graph having finite vertex set V, operators are induced on fermion and zeon algebras by the action of the graph's adjacency matrix and combinatorial Laplacian on the vector space spanned by the graph's vertices. When the graph is simple (undirected with no loops or multiple edges), the matrices are symmetric and the induced operators are self-adjoint. The goal of the current paper is to recover a number of known graph-theoretic results from quantum observables constructed as linear operators on fermion and zeon Fock spaces. By considering an "indeterminate" fermion/zeon Fock space, a fermion-zeon convolution operator is defined whose trace recovers the number of Hamiltonian cycles in the graph. This convolution operator is a quantum observable whose expectation reveals the number of Hamiltonian cycles in the graph.
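    As a naive combinatorial reference for the quantity the operator's trace recovers (this is only a brute-force check on small graphs, not the fermion-zeon construction itself):

```python
from itertools import permutations

def hamiltonian_cycle_count(adjacency):
    """Brute-force count of undirected Hamiltonian cycles in a simple graph
    given as a 0/1 adjacency matrix. Useful only for small n."""
    n = len(adjacency)
    count = 0
    # Fix vertex 0 as the start so rotations of a cycle are not recounted.
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(adjacency[cycle[i]][cycle[(i + 1) % n]] for i in range(n)):
            count += 1
    return count // 2  # each undirected cycle is traversed in 2 directions

# Complete graph K4 has (4-1)!/2 = 3 Hamiltonian cycles.
k4 = [[int(i != j) for j in range(4)] for i in range(4)]
print(hamiltonian_cycle_count(k4))
```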

  17. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature.

    PubMed

    Li, Yuankun; Xu, Tingfa; Deng, Honggao; Shi, Guokai; Guo, Jie

    2018-02-23

    Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.

  18. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Psihas, Fernanda

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.

  19. Applications of deep convolutional neural networks to digitized natural history collections

    PubMed Central

    Frandsen, Paul B.; Dikow, Rebecca B.; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A.; Dorr, Laurence J.

    2017-01-01

    Abstract Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools. PMID:29200929

  20. Nonparametric Representations for Integrated Inference, Control, and Sensing

    DTIC Science & Technology

    2015-10-01

    Multi-layer feature learning; "SuperVision" Convolutional Neural Network (CNN); ImageNet classification with deep convolutional neural networks. [20] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep ... (ICML), 2013. The goal is to develop a new framework for autonomous operations that will extend the state of the art in distributed learning and modeling from data.

  1. Distortion of the convolution spectra of PSK signals in frequency multipliers

    NASA Astrophysics Data System (ADS)

    Viniarskii, V. F.; Marchenko, V. F.; Petrin, Iu. M.

    1983-09-01

    The influence of the input and output circuits of frequency multipliers on the convolution spectrum of binary and ternary PSK signals is examined. It is shown that transient processes caused by the phase switching of the input signal lead to the amplitude-phase modulation of the harmonic signal. Experimental results are presented on the balance circuits of MOS varactor doublers and triplers.

  2. Location tests for biomarker studies: a comparison using simulations for the two-sample case.

    PubMed

    Scheinhardt, M O; Ziegler, A

    2013-01-01

    Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy tailed or heavy skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
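    A type I error comparison of this kind can be sketched with a small Monte-Carlo simulation; the lognormal null, sample sizes and replication count below are illustrative choices, not the paper's g-and-k settings:

```python
import numpy as np
from scipy import stats

# Under a skewed, heavy-tailed null (identical lognormal distributions in
# both groups), both the t test and the Mann-Whitney U test should reject
# at roughly the nominal 5% level.
rng = np.random.default_rng(42)
n_sim, n_per_group, alpha = 2000, 30, 0.05
reject_t = reject_u = 0
for _ in range(n_sim):
    x = rng.lognormal(mean=0.0, sigma=1.0, size=n_per_group)
    y = rng.lognormal(mean=0.0, sigma=1.0, size=n_per_group)
    if stats.ttest_ind(x, y).pvalue < alpha:
        reject_t += 1
    if stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        reject_u += 1
print(reject_t / n_sim, reject_u / n_sim)  # both empirical rates near 0.05
```

    Replacing the null with unequal group variances is the scenario where, per the abstract, most tests fail to hold the type I error level.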

  3. New Views on Strand Asymmetry in Insect Mitochondrial Genomes

    PubMed Central

    Wei, Shu-Jun; Shi, Min; Chen, Xue-Xin; Sharkey, Michael J.; van Achterberg, Cornelis; Ye, Gong-Yin; He, Jun-Hua

    2010-01-01

    Strand asymmetry in nucleotide composition is a remarkable feature of animal mitochondrial genomes. Understanding the mutation processes that shape strand asymmetry is essential for comprehensive knowledge of genome evolution, demographic population history and accurate phylogenetic inference. Previous studies found that the relative contributions of different substitution types to strand asymmetry are associated with replication alone or both replication and transcription. However, the relative contributions of replication and transcription to strand asymmetry remain unclear. Here we conducted a broad survey of strand asymmetry across 120 insect mitochondrial genomes, with special reference to the correlation between the signs of skew values and replication orientation/gene direction. The results show that the sign of GC skew on entire mitochondrial genomes is reversed in all species of three distantly related families of insects, Philopteridae (Phthiraptera), Aleyrodidae (Hemiptera) and Braconidae (Hymenoptera); the replication-related elements in the A+T-rich regions of these species are inverted, confirming that reversal of strand asymmetry (GC skew) was caused by inversion of replication origin; and finally, the sign of GC skew value is associated with replication orientation but not with gene direction, while that of AT skew value varies with gene direction, replication and codon positions used in analyses. These findings show that deaminations during replication and other mutations contribute more than selection on amino acid sequences to strand compositions of G and C, and that the replication process has a stronger effect on A and T content than does transcription. Our results may contribute to genome-wide studies of replication and transcription mechanisms. PMID:20856815
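    The skew statistics underlying such surveys are simple ratios; a minimal sketch with a toy sequence:

```python
def strand_skews(seq: str):
    """Compute GC skew (G-C)/(G+C) and AT skew (A-T)/(A+T) for a
    nucleotide sequence -- the quantities whose signs are correlated with
    replication orientation and gene direction in the survey."""
    seq = seq.upper()
    g, c = seq.count("G"), seq.count("C")
    a, t = seq.count("A"), seq.count("T")
    gc = (g - c) / (g + c) if g + c else 0.0
    at = (a - t) / (a + t) if a + t else 0.0
    return gc, at

# Toy sequence: more G than C (positive GC skew), more T than A (negative AT skew).
print(strand_skews("GGGGCATTT"))
```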

  4. Development of Nonword and Irregular Word Lists for Australian Grade 3 Students Using Rasch Analysis

    ERIC Educational Resources Information Center

    Callinan, Sarah; Cunningham, Everarda; Theiler, Stephen

    2014-01-01

    Many tests used in educational settings to identify learning difficulties endeavour to pick up only the lowest performers. Yet these tests are generally developed within a Classical Test Theory (CTT) paradigm that assumes that data do not have significant skew. Rasch analysis is more tolerant of skew and was used to validate two newly developed…

  5. Uncertainty relations with the generalized Wigner-Yanase-Dyson skew information

    NASA Astrophysics Data System (ADS)

    Fan, Yajing; Cao, Huaixin; Wang, Wenhua; Meng, Huixian; Chen, Liang

    2018-07-01

    The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. We introduce the generalized Wigner-Yanase-Dyson correlation and the related quantities. Various properties of them are discussed. Finally, we establish several generalizations of uncertainty relation expressed in terms of the generalized Wigner-Yanase-Dyson skew information.
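    For reference, the Wigner-Yanase-Dyson skew information that the paper's correlation generalizes can be written as follows (the standard form from the literature; the paper's own generalized quantity is not reproduced here):

```latex
% Wigner-Yanase-Dyson skew information of a state \rho with respect to an
% observable A, for parameter 0 < \alpha < 1; the choice \alpha = 1/2
% recovers the Wigner-Yanase skew information
% I_\rho(A) = -\tfrac{1}{2}\,\mathrm{Tr}\,[\sqrt{\rho}, A]^2.
I_\rho^\alpha(A) = -\tfrac{1}{2}\,\mathrm{Tr}\!\left([\rho^\alpha, A]\,[\rho^{1-\alpha}, A]\right),
\qquad 0 < \alpha < 1.
```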

  6. Environmental Support for High Frequency Acoustic Measurements at NOSC (Naval Ocean Systems Center) Oceanographic Tower, 26 April-7 May 1982. Part 1. Sediment Geoacoustic Properties

    DTIC Science & Technology

    1983-06-01

    surrounding the tripod (cores 14, 15, 16) were moderately to moderately well-sorted, fine-skewed, platykurtic to mesokurtic coarse-grained sands. The ...side of the interface. Sediments throughout the 18 cm length of core 7 were moderately sorted, very finely-skewed, platykurtic to mesokurtic

  7. Social Justice and South African University Student Enrolment Data by "Race", 1998-2012: From "Skewed Revolution" to "Stalled Revolution"

    ERIC Educational Resources Information Center

    Cooper, David

    2015-01-01

    The paper looks closely at student enrolment trends through a case study of South African "race" enrolment data, including some hypotheses about how student social class has influenced these trends. First, data on 1988-1998 enrolments showing a "skewed revolution" in student africanisation are summarised. Then, using 2000-2012…

  8. The Use of the Skew T, Log P Diagram in Analysis and Forecasting. Revision

    DTIC Science & Technology

    1990-03-01

    ...28 x 30 inches ... been added to further enhance the value of the diagram. This version now includes a detailed description of the Skew T, ...

  9. Analysis of Parasite and Other Skewed Counts

    PubMed Central

    Alexander, Neal

    2012-01-01

    Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten year period of Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in identifying measures of location, in particular the Williams and geometric mean. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing 1) non-parametric methods, while noting that they are not simply comparisons of medians, and 2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods, such as the bootstrap, with potential for greater use are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
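    The measures of location under discussion can be sketched directly; the Williams mean is the geometric mean of the counts plus one, minus one, which (unlike a plain geometric mean) tolerates the zero counts common in parasitology:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def williams_mean(xs):
    """Williams mean: geometric mean of (x + 1), minus 1, so that
    zero counts (e.g. parasite-negative subjects) are handled."""
    return math.exp(sum(math.log(x + 1) for x in xs) / len(xs)) - 1

counts = [0, 1, 3]              # includes a zero, so a plain geometric mean fails
print(arithmetic_mean(counts))  # 4/3
print(williams_mean(counts))    # (1 * 2 * 4) ** (1/3) - 1 = 1.0
```

    For skewed counts the arithmetic mean remains a legitimate (and often preferable) measure, as the review emphasizes; the Williams mean answers a different question.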

  10. MEASUREMENT OF WIND SPEED FROM COOLING LAKE THERMAL IMAGERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, A; Robert Kurzeja, R; Eliel Villa-Aleman, E

    2009-01-20

    The Savannah River National Laboratory (SRNL) collected thermal imagery and ground truth data at two commercial power plant cooling lakes to investigate the applicability of laboratory empirical correlations between surface heat flux and wind speed, and statistics derived from thermal imagery. SRNL demonstrated in a previous paper [1] that a linear relationship exists between the standard deviation of image temperature and surface heat flux. In this paper, SRNL will show that the skewness of the temperature distribution derived from cooling lake thermal images correlates with instantaneous wind speed measured at the same location. SRNL collected thermal imagery, surface meteorology and water temperatures from helicopters and boats at the Comanche Peak and H. B. Robinson nuclear power plant cooling lakes. SRNL found that decreasing skewness correlated with increasing wind speed, as was the case for the laboratory experiments. Simple linear and orthogonal regression models both explained about 50% of the variance in the skewness - wind speed plots. A nonlinear (logistic) regression model produced a better fit to the data, apparently because the thermal convection and resulting skewness are related to wind speed in a highly nonlinear way in nearly calm and in windy conditions.
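    The image statistic in question is the third standardized moment of the pixel temperatures; a minimal sketch with illustrative data:

```python
def sample_skewness(values):
    """Third standardized moment, m3 / m2^(3/2) -- the statistic derived
    from the thermal-image temperature distribution."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / m2 ** 1.5

# A symmetric sample has zero skewness; dragging one value far below the
# rest (a cool patch on the lake surface) makes the sample left-skewed.
print(sample_skewness([1, 2, 3, 4, 5]))
print(sample_skewness([1, 2, 3, 4, -20]))
```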

  11. A Bayesian estimate of the concordance correlation coefficient with skewed data.

    PubMed

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data is normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods has been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Using response time distributions to examine top-down influences on attentional capture.

    PubMed

    Burnham, Bryan R

    2013-02-01

    Three experiments examined contingent attentional capture, which is the finding that cuing effects are larger when cues are perceptually similar to a target than when they are dissimilar to the target. This study also analyzed response times (RTs) in terms of the underlying distributions for valid cues and invalid cues. Specifically, an ex-Gaussian analysis and a vincentile analysis examined the influence of top-down attentional control settings on the shift and skew of RT distributions and how the shift and the skew contributed to the cuing effects in the mean RTs. The results showed that cue/target similarity influenced the size of cuing effects. The RT distribution analyses showed that the cuing effects reflected only a shifting effect, not a skewing effect, in the RT distribution between valid cues and invalid cues. That is, top-down attentional control moderated the cuing effects in the mean RTs through distribution shifting, not distribution skewing. The results support the contingent orienting hypothesis (Folk, Remington, & Johnston, Journal of Experimental Psychology: Human Perception and Performance, 18, 1030-1044, 1992) over the attentional disengagement account (Theeuwes, Atchley, & Kramer, 2000) as an explanation for when top-down attentional settings influence the selection of salient stimuli.
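    The ex-Gaussian decomposition used in these analyses can be illustrated by simulation: an ex-Gaussian variable is a Normal(mu, sigma) plus an independent Exponential(tau) component, where mu shifts the whole distribution and tau stretches its right tail. Parameter values below are arbitrary, chosen only to show that a shift and a tail change can produce the same mean difference:

```python
import numpy as np

rng = np.random.default_rng(7)

def ex_gaussian(mu, sigma, tau, size):
    """Samples from an ex-Gaussian: Gaussian component plus exponential tail."""
    return rng.normal(mu, sigma, size) + rng.exponential(tau, size)

baseline = ex_gaussian(mu=400.0, sigma=40.0, tau=100.0, size=200_000)
shifted = ex_gaussian(mu=450.0, sigma=40.0, tau=100.0, size=200_000)   # pure shift
skewed = ex_gaussian(mu=400.0, sigma=40.0, tau=150.0, size=200_000)    # tail change

# The ex-Gaussian mean is mu + tau, so a distribution shift and a tail
# (skew) change can yield identical mean RT effects with different shapes.
print(round(baseline.mean()), round(shifted.mean()), round(skewed.mean()))
```

    Distinguishing these two routes to the same mean cuing effect is exactly what the distributional analyses in the study are for.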

  13. Observed, unknown distributions of clinical chemical quantities should be considered to be log-normal: a proposal.

    PubMed

    Haeckel, Rainer; Wosniok, Werner

    2010-10-01

    The distribution of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
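    The dependence of log-normal skewness on the coefficient of variation has a closed form, skew = (CV² + 3)·CV, which makes the argument quantitative; a sketch (the CV magnitudes are illustrative):

```python
def lognormal_skewness(cv: float) -> float:
    """Exact skewness of a log-normal distribution in terms of its
    coefficient of variation: using exp(sigma^2) = CV^2 + 1 in the
    standard formula (exp(sigma^2) + 2) * sqrt(exp(sigma^2) - 1)
    gives skew = (CV^2 + 3) * CV."""
    return (cv ** 2 + 3.0) * cv

# Small biological variation (CV ~ 2%, e.g. plasma sodium): skewness is
# tiny, so the log-normal is visually indistinguishable from a Gaussian.
print(lognormal_skewness(0.02))
# Large variation (CV ~ 50%): the distribution is clearly right-skewed.
print(lognormal_skewness(0.50))   # 1.625
```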

  14. Waving and skewing: how gravity and the surface of growth media affect root development in Arabidopsis.

    PubMed

    Oliva, Michele; Dunand, Christophe

    2007-01-01

    Arabidopsis seedlings growing on inclined agar surfaces exhibit characteristic root behaviours called 'waving' and 'skewing': the former consists of a series of undulations, whereas the latter is a deviation from the direction of gravity. Even though the precise basis of these growth patterns is not well understood, both gravity and the contact between the medium and the root are considered to be the major players that result in these processes. The influence of these forces on root surface-dependent behaviours can be verified by growing seedlings at different gel pitches: plants growing on vertical plates present roots with slight waving and skewing when compared with seedlings grown on plates held at angles of less than 90 degrees. However, other factors are thought to modulate root growth on agar; for instance, it has been demonstrated that the presence and concentration of certain compounds in the medium (such as sucrose) and of drugs able to modify the plant cell cytoskeleton also affect skewing and waving. The recent discovery of an active role of ethylene on surface-dependent root behaviour, and the finding of new mutants showing anomalous growth, pave the way for a more detailed description of these phenomena.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Yu, E-mail: yu.pan@anu.edu.au; Miao, Zibo, E-mail: zibo.miao@anu.edu.au; Amini, Hadis, E-mail: nhamini@stanford.edu

    Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.
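
    The role of the invariant state can be illustrated with a minimal numerical sketch. The system chosen here, a single qubit with amplitude damping, is our illustrative assumption, not the paper's model: Euler integration of the Lindblad master equation drives every initial state toward the unique invariant state of the semigroup.

```python
import numpy as np

# Illustrative sketch (our toy system, not the paper's): Euler integration of
# the Lindblad master equation for one qubit with amplitude damping,
# d(rho)/dt = gamma * (L rho L^+ - 0.5 {L^+ L, rho}),
# showing convergence of the quantum Markov semigroup to its invariant state.
gamma = 1.0
L = np.array([[0, 1], [0, 0]], dtype=complex)            # lowering operator
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # initial |+><+| state

dt, steps = 0.01, 2000
for _ in range(steps):
    drho = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    rho = rho + dt * drho

# Pure amplitude damping has a unique invariant state, the ground state |0><0|
print(np.round(rho.real, 3))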

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Irina, E-mail: aniri-dum@yahoo.com; Isar, Aurelian

    In the framework of the theory of open systems based on completely positive quantum dynamical semigroups, we give a description of the continuous variable entanglement for a system consisting of two non-interacting bosonic modes embedded in a thermal environment. The calculated measure of entanglement is entanglement of formation. We describe the evolution of entanglement in terms of the covariance matrix for symmetric Gaussian input states. In the case of an entangled initial squeezed thermal state, entanglement suppression (entanglement sudden death) takes place, for all non-zero temperatures of the thermal bath. After that, the system remains for all times in a separable state. For a zero temperature of the thermal bath, the system remains entangled for all finite times, but in the limit of asymptotic large times the state becomes separable.

  17. A stochastic SIS epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Cao, Boqiang; Shan, Meijing; Zhang, Qimin; Wang, Weiming

    2017-11-01

    In this paper, we investigate the basic features of an SIS-type infectious disease model with varying population size and vaccination in the presence of environmental noise. By applying Markov semigroup theory, we propose a stochastic reproduction number R0s which can be seen as a threshold parameter for identifying stochastic extinction and persistence: if R0s < 1, under some mild extra conditions, there exists a disease-free absorbing set for the stochastic epidemic model, which implies that the disease dies out with probability one; while if R0s > 1, under some mild extra conditions, the SDE model has an endemic stationary distribution which results in the stochastic persistence of the infectious disease. The most interesting finding is that large environmental noise can suppress the outbreak of the disease.
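
    The threshold behaviour can be sketched with an Euler-Maruyama simulation of an assumed scalar stochastic SIS model; the specific noise form and all parameter values below are our illustrative choices, not the paper's model.

```python
import math
import random

# Hedged sketch (not the paper's exact model): Euler-Maruyama simulation of
#   dI = [beta*I*(1-I) - mu*I] dt + sigma*I*(1-I) dW,
# with I the infected fraction. Large sigma lowers the effective stochastic
# threshold and drives the disease to extinction. Parameters are illustrative.
def simulate_sis(beta, mu, sigma, i0=0.1, dt=0.01, steps=50_000, seed=1):
    random.seed(seed)
    i = i0
    for _ in range(steps):
        drift = beta * i * (1 - i) - mu * i
        noise = sigma * i * (1 - i) * random.gauss(0.0, math.sqrt(dt))
        i = min(max(i + drift * dt + noise, 0.0), 1.0)
    return i

# Deterministic R0 = beta/mu > 1 in both runs; only the noise level differs.
persistent = simulate_sis(beta=0.8, mu=0.4, sigma=0.1)   # small noise: endemic
extinct = simulate_sis(beta=0.8, mu=0.4, sigma=1.5)      # large noise: dies out
print(persistent, extinct)
```

    With small noise the infected fraction fluctuates around its endemic level, while large noise suppresses the outbreak even though the deterministic R0 exceeds one, mirroring the abstract's main finding.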

  18. Adiabatic Markovian dynamics.

    PubMed

    Oreshkov, Ognyan; Calsamiglia, John

    2010-07-30

    We propose a theory of adiabaticity in quantum Markovian dynamics based on a decomposition of the Hilbert space induced by the asymptotic behavior of the Lindblad semigroup. A central idea of our approach is that the natural generalization of the concept of eigenspace of the Hamiltonian in the case of Markovian dynamics is a noiseless subsystem with a minimal noisy cofactor. Unlike previous attempts to define adiabaticity for open systems, our approach deals exclusively with physical entities and provides a simple, intuitive picture at the Hilbert-space level, linking the notion of adiabaticity to the theory of noiseless subsystems. As two applications of our theory, we propose a general framework for decoherence-assisted computation in noiseless codes and a dissipation-driven approach to holonomic computation based on adiabatic dragging of subsystems that is generally not achievable by nondissipative means.

  19. Control for well-posedness about a class of non-Newtonian incompressible porous medium fluid equations

    NASA Astrophysics Data System (ADS)

    Deng, Shuxian; Ge, Xinxin

    2017-10-01

    Considering the non-Newtonian fluid equations of incompressible porous media, we use the properties of operator semigroups and measure spaces, the squeeze principle, Fourier analysis and a priori estimates in the measure space to discuss the well-posedness of the solution of the equations, its asymptotic behavior and its topological properties. Through the diffusion regularization method and a compactness argument, we study the overall decay rate of the solution in a certain space when the initial value is sufficiently regular. A decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using the double regularization model and the Duhamel principle.

  20. Potassium intake modulates the thiazide-sensitive sodium-chloride cotransporter (NCC) activity via the Kir4.1 potassium channel.

    PubMed

    Wang, Ming-Xiao; Cuevas, Catherina A; Su, Xiao-Tong; Wu, Peng; Gao, Zhong-Xiuzi; Lin, Dao-Hong; McCormick, James A; Yang, Chao-Ling; Wang, Wen-Hui; Ellison, David H

    2018-04-01

    Kir4.1 in the distal convoluted tubule plays a key role in sensing plasma potassium and in modulating the thiazide-sensitive sodium-chloride cotransporter (NCC). Here we tested whether dietary potassium intake modulates Kir4.1 and whether this is essential for mediating the effect of potassium diet on NCC. High potassium intake inhibited the basolateral 40 pS potassium channel (a Kir4.1/5.1 heterotetramer) in the distal convoluted tubule, decreased basolateral potassium conductance, and depolarized the distal convoluted tubule membrane in Kcnj10flox/flox mice, herein referred to as control mice. In contrast, low potassium intake activated Kir4.1, increased potassium currents, and hyperpolarized the distal convoluted tubule membrane. These effects of dietary potassium intake on the basolateral potassium conductance and membrane potential in the distal convoluted tubule were completely absent in inducible kidney-specific Kir4.1 knockout mice. Furthermore, high potassium intake decreased, whereas low potassium intake increased the abundance of NCC expression only in the control but not in kidney-specific Kir4.1 knockout mice. Renal clearance studies demonstrated that low potassium augmented, while high potassium diminished, hydrochlorothiazide-induced natriuresis in control mice. Disruption of Kir4.1 significantly increased basal urinary sodium excretion but it abolished the natriuretic effect of hydrochlorothiazide. Finally, hypokalemia and metabolic alkalosis in kidney-specific Kir4.1 knockout mice were exacerbated by potassium restriction and only partially corrected by a high-potassium diet. Thus, Kir4.1 plays an essential role in mediating the effect of dietary potassium intake on NCC activity and potassium homeostasis. Copyright © 2017 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.

  1. A Configurable Event-Driven Convolutional Node with Rate Saturation Mechanism for Modular ConvNet Systems Implementation.

    PubMed

    Camuñas-Mesa, Luis A; Domínguez-Cordero, Yaisel L; Linares-Barranco, Alejandro; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé

    2018-01-01

    Convolutional Neural Networks (ConvNets) are a particular type of neural network used for many applications such as image recognition, video analysis or natural language processing. They are inspired by the human brain, following a specific organization of the connectivity pattern between layers of neurons known as the receptive field. These networks have traditionally been implemented in software, but they become more computationally expensive as they scale up, limiting real-time processing of high-speed stimuli. Hardware implementations, on the other hand, are difficult to reuse across applications because of their reduced flexibility. In this paper, we propose a fully configurable event-driven convolutional node with a rate saturation mechanism that can be used to implement arbitrary ConvNets on FPGAs. This node includes a convolutional processing unit and a routing element, which allows large 2D arrays to be built in which any multilayer structure can be implemented. The rate saturation mechanism emulates the refractory behavior of biological neurons, guaranteeing a minimum separation in time between consecutive events. A 4-layer ConvNet with 22 convolutional nodes trained for poker card symbol recognition has been implemented in a Spartan6 FPGA. This network has been tested with a stimulus in which 40 poker cards were observed by a Dynamic Vision Sensor (DVS) in 1 s. Different slow-down factors were applied to characterize the behavior of the system for high-speed processing. For slow stimulus play-back, a 96% recognition rate is obtained with a power consumption of 0.85 mW. At maximum play-back speed, a traffic control mechanism downsamples the input stimulus, obtaining a recognition rate above 63% when less than 20% of the input events are processed, demonstrating the robustness of the network.
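
    The rate saturation mechanism can be sketched as a refractory filter on an event stream: any event arriving within a refractory period of the previous event emitted by the same neuron is dropped. The function name, neuron ids and timestamps below are our illustrative choices, not the paper's implementation.

```python
# Illustrative sketch of a rate-saturation (refractory) filter for an
# event-driven stream: events closer in time than `refractory_us` to the
# previous event kept for the same neuron are discarded.
def rate_saturate(events, refractory_us):
    """events: list of (timestamp_us, neuron_id) pairs; returns events kept."""
    last_emit = {}
    kept = []
    for t, neuron in events:
        if neuron not in last_emit or t - last_emit[neuron] >= refractory_us:
            kept.append((t, neuron))
            last_emit[neuron] = t
    return kept

events = [(0, 'a'), (5, 'a'), (12, 'a'), (3, 'b'), (30, 'a')]
kept = rate_saturate(events, refractory_us=10)
print(kept)
```

    The event at t = 5 for neuron 'a' falls inside the 10 µs refractory window and is dropped, guaranteeing the minimum separation in time between consecutive events that the abstract describes.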

  2. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    PubMed

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
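
    The top-2 accuracy metric reported above can be sketched directly: an image counts as correct if its true class appears among the classifier's two highest-scoring classes. The scores and labels below are toy values, not data from the study.

```python
# Sketch of top-k accuracy, assuming each row of `scores` holds one image's
# per-class classifier scores. Toy numbers, not the study's data.
def top_k_accuracy(scores, labels, k=2):
    correct = 0
    for row, label in zip(scores, labels):
        ranked = sorted(range(len(row)), key=lambda c: row[c], reverse=True)
        correct += label in ranked[:k]
    return correct / len(labels)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
labels = [0, 0, 1]
print(top_k_accuracy(scores, labels, k=1), top_k_accuracy(scores, labels, k=2))
```

    Relaxing from top-1 to top-2 can only raise the score, which is why the top-2 figures in the abstract exceed the plain accuracies.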

  3. A Configurable Event-Driven Convolutional Node with Rate Saturation Mechanism for Modular ConvNet Systems Implementation

    PubMed Central

    Camuñas-Mesa, Luis A.; Domínguez-Cordero, Yaisel L.; Linares-Barranco, Alejandro; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé

    2018-01-01

    Convolutional Neural Networks (ConvNets) are a particular type of neural network often used for many applications like image recognition, video analysis or natural language processing. They are inspired by the human brain, following a specific organization of the connectivity pattern between layers of neurons known as receptive field. These networks have been traditionally implemented in software, but they are becoming more computationally expensive as they scale up, having limitations for real-time processing of high-speed stimuli. On the other hand, hardware implementations show difficulties to be used for different applications, due to their reduced flexibility. In this paper, we propose a fully configurable event-driven convolutional node with rate saturation mechanism that can be used to implement arbitrary ConvNets on FPGAs. This node includes a convolutional processing unit and a routing element which allows to build large 2D arrays where any multilayer structure can be implemented. The rate saturation mechanism emulates the refractory behavior in biological neurons, guaranteeing a minimum separation in time between consecutive events. A 4-layer ConvNet with 22 convolutional nodes trained for poker card symbol recognition has been implemented in a Spartan6 FPGA. This network has been tested with a stimulus where 40 poker cards were observed by a Dynamic Vision Sensor (DVS) in 1 s time. Different slow-down factors were applied to characterize the behavior of the system for high speed processing. For slow stimulus play-back, a 96% recognition rate is obtained with a power consumption of 0.85 mW. At maximum play-back speed, a traffic control mechanism downsamples the input stimulus, obtaining a recognition rate above 63% when less than 20% of the input events are processed, demonstrating the robustness of the network. PMID:29515349

  4. The discriminatory cost of ICD-10-CM transition between clinical specialties: metrics, case study, and mitigating tools

    PubMed Central

    Boyd, Andrew D; Li, Jianrong ‘John’; Burton, Mike D; Jonen, Michael; Gardeux, Vincent; Achour, Ikbel; Luo, Roger Q; Zenku, Ilir; Bahroos, Neil; Brown, Stephen B; Vanden Hoek, Terry; Lussier, Yves A

    2013-01-01

    Objective Applying the science of networks to quantify the discriminatory impact of the ICD-9-CM to ICD-10-CM transition between clinical specialties. Materials and Methods Datasets were the Centers for Medicare and Medicaid Services ICD-9-CM to ICD-10-CM mapping files, general equivalence mappings, and statewide Medicaid emergency department billing. Diagnoses were represented as nodes and their mappings as directional relationships. The complex network was synthesized as an aggregate of simpler motifs and tabulation per clinical specialty. Results We identified five mapping motif categories: identity, class-to-subclass, subclass-to-class, convoluted, and no mapping. Convoluted mappings indicate that multiple ICD-9-CM and ICD-10-CM codes share complex, entangled, and non-reciprocal mappings. The proportions of convoluted diagnoses mappings (36% overall) range from 5% (hematology) to 60% (obstetrics and injuries). In a case study of 24,008 patient visits in 217 emergency departments, 27% of the costs are associated with convoluted diagnoses, with ‘abdominal pain’ and ‘gastroenteritis’ accounting for approximately 3.5%. Discussion Previous qualitative studies report that administrators and clinicians are likely to be challenged in understanding and managing their practice because of the ICD-10-CM transition. We substantiate the complexity of this transition with a thorough quantitative summary per clinical specialty, a case study, and the tools to apply this methodology easily to any clinical practice in the form of a web portal and analytic tables. Conclusions Post-transition, successful management of frequent diseases with convoluted mapping network patterns is critical. The http://lussierlab.org/transition-to-ICD10CM web portal provides insight in linking onerous diseases to the ICD-10 transition. PMID:23645552
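
    The motif tabulation can be sketched with a toy mapping network; the codes and the simplified decision rules below are illustrative assumptions, not the CMS general equivalence mappings or the paper's exact algorithm.

```python
# Hedged sketch of motif classification on a toy mapping graph. `fwd` maps
# each source code to its target codes, `rev` is the reverse map. For brevity
# the subclass-to-class and convoluted motifs are folded together here.
def classify_motif(code, fwd, rev):
    targets = fwd.get(code, set())
    if not targets:
        return 'no mapping'
    back = set().union(*(rev.get(t, set()) for t in targets))
    if back != {code}:
        return 'convoluted'   # targets entangled with other source codes
    return 'identity' if len(targets) == 1 else 'class-to-subclass'

fwd = {'A': {'X'}, 'B': {'Y1', 'Y2'}, 'C': {'Z'}, 'D': {'Z'}, 'E': set()}
rev = {'X': {'A'}, 'Y1': {'B'}, 'Y2': {'B'}, 'Z': {'C', 'D'}}
motifs = {c: classify_motif(c, fwd, rev) for c in fwd}
print(motifs)
```

    Code 'A' maps reciprocally one-to-one (identity), 'B' fans out to targets that only map back to it (class-to-subclass), while 'C' and 'D' share a target and are therefore entangled, the pattern the paper flags as costly.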

  5. Application of carrier testing to genetic counseling for X-linked agammaglobulinemia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, R.C.; Nachtman, R.G.; Belmont, J.W.

    Bruton X-linked agammaglobulinemia (XLA) is a phenotypically recessive genetic disorder of B lymphocyte development. Female carriers of XLA, although asymptomatic, have a characteristic B cell lineage-specific skewing of the pattern of X inactivation. Skewing apparently results from defective growth and maturation of B cell precursors bearing a mutant active X chromosome. In this study, carrier status was tested in 58 women from 22 families referred with a history of agammaglobulinemia. Primary carrier analysis to examine patterns of X inactivation in CD19+ peripheral blood cells (B lymphocytes) was conducted using quantitative PCR at the androgen-receptor locus. Obligate carriers of XLA demonstrated >95% skewing of X inactivation in peripheral blood CD19+ cells but not in CD19−

  6. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
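
    A toy sketch of three ingredients named above: L-type and Gamma-type input membership functions, Mamdani minimum inference, and center-of-sums defuzzification over a sampled output universe. All set shapes and numbers are our illustrative choices, not the paper's controller models.

```python
# Toy fuzzy-inference sketch (shapes and numbers are ours, not the paper's).
def l_type(x, a, b):
    """Membership 1 up to a, falling linearly to 0 at b."""
    return 1.0 if x <= a else (0.0 if x >= b else (b - x) / (b - a))

def gamma_type(x, a, b):
    """Membership 0 up to a, rising linearly to 1 at b."""
    return 0.0 if x <= a else (1.0 if x >= b else (x - a) / (b - a))

def center_of_sums(samples, membership_sets):
    """Weighted average of output samples, weights summed over all sets."""
    weights = [sum(m[i] for m in membership_sets) for i in range(len(samples))]
    return sum(x * w for x, w in zip(samples, weights)) / sum(weights)

mu_small = l_type(0.3, 0.0, 1.0)       # firing degree of the "small" rule
mu_large = gamma_type(0.3, 0.0, 1.0)   # firing degree of the "large" rule
xs = [0.0, 0.5, 1.0]                   # sampled output universe
# Mamdani minimum inference: clip each triangular output set at its firing degree
m_small = [min(mu_small, v) for v in [1.0, 0.5, 0.0]]
m_large = [min(mu_large, v) for v in [0.0, 0.5, 1.0]]
out = center_of_sums(xs, [m_small, m_large])
print(out)
```

    Unlike center-of-gravity on the pointwise maximum, center of sums adds overlapping memberships before averaging, which is what makes closed-form controller models tractable.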

  7. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness subject to a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
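
    The three statistics in the model's name can be sketched for a crisp return sample; the paper works with fuzzy random returns, so the plain-number version below is only an illustration of the quantities being traded off.

```python
# Sketch of the mean, semivariance and skewness of a return series (crisp
# numbers for illustration; the paper's returns are fuzzy random variables).
def mean(r):
    return sum(r) / len(r)

def semivariance(r):
    """Downside risk: average squared deviation of below-mean returns only."""
    m = mean(r)
    return sum((x - m) ** 2 for x in r if x < m) / len(r)

def skewness(r):
    m = mean(r)
    var = sum((x - m) ** 2 for x in r) / len(r)
    return sum((x - m) ** 3 for x in r) / (len(r) * var ** 1.5)

returns = [0.02, -0.01, 0.03, 0.00, 0.05, -0.02]   # illustrative sample
print(mean(returns), semivariance(returns), skewness(returns))
```

    The model maximizes skewness (a preference for upside surprises) while semivariance, unlike variance, penalizes only downside deviations, which is why it is used as the risk measure.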

  8. Transition to collective oscillations in finite Kuramoto ensembles

    NASA Astrophysics Data System (ADS)

    Peter, Franziska; Pikovsky, Arkady

    2018-03-01

    We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.
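
    The complex order parameter discussed above can be sketched with a minimal mean-field Kuramoto simulation; the coupling values, ensemble size and integration settings are illustrative assumptions, not the paper's numerics.

```python
import cmath
import math
import random

# Illustrative sketch (parameters ours): Euler integration of the Kuramoto
# model in mean-field form, tracking the complex order parameter
# z = r * exp(i * psi) whose minimal amplitude the abstract uses as indicator.
def simulate(coupling, n=200, dt=0.05, steps=2000, seed=0):
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(n)]          # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

# For unit-variance Gaussian frequencies, Kc = 2*sqrt(2/pi) ~ 1.6: below it the
# order parameter stays near its finite-size floor; well above it, r -> 1.
r_low, r_high = simulate(0.5), simulate(4.0)
print(r_low, r_high)
```

    In a finite ensemble the subcritical value of r never vanishes but fluctuates around a sampling-dependent floor, which is exactly the finite-size effect the paper analyzes.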

  9. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images.

    PubMed

    Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned representations and strong generalization ability. A specified network structure for nodule images is proposed to solve the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained on 62,492 regions-of-interest (ROIs) samples including 40,772 nodules and 21,720 nonnodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods.

  10. Manufacture and quality control of interconnecting wire harnesses, Volume 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A standard is presented for the manufacture, installation, and quality control of eight types of interconnecting wire harnesses. The processes, process controls, and inspection and test requirements reflected are based on acknowledgment of harness design requirements, acknowledgment of harness installation requirements, identification of the various parts, materials, etc., utilized in harness manufacture, and formulation of a typical manufacturing flow diagram for identification of each manufacturing and quality control process, operation, inspection, and test. The document covers interconnecting wire harnesses defined in the design standard, including type 1, enclosed in fluorocarbon elastomer convolute tubing; type 2, enclosed in TFE convolute tubing lined with fiberglass braid; type 3, enclosed in TFE convolute tubing; and type 5, a combination of types 3 and 4. Knowledge gained through experience on the Saturn 5 program coupled with recent advances in techniques, materials, and processes was incorporated.

  11. Fully convolutional neural networks for polyp segmentation in colonoscopy

    NASA Astrophysics Data System (ADS)

    Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail

    2017-03-01

    Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on operator skill and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCN) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.

  12. FDTD modelling of induced polarization phenomena in transient electromagnetics

    NASA Astrophysics Data System (ADS)

    Commer, Michael; Petrov, Peter V.; Newman, Gregory A.

    2017-04-01

    The finite-difference time-domain scheme is augmented to treat the modelling of transient electromagnetic signals containing induced polarization effects from 3-D distributions of polarizable media. Compared to the non-dispersive problem, the discrete dispersive Maxwell system contains costly convolution operators. Key components of our solution for highly digitized model meshes are Debye decomposition and composite memory variables. We revert to the popular Cole-Cole model of dispersion to describe the frequency-dependent behaviour of electrical conductivity. Its inversely Laplace-transformed Debye decomposition results in a series of time convolutions between the electric field and exponential decay functions, with the latter reflecting each Debye constituent's individual relaxation time. These function types in the discrete-time convolution allow for their substitution by memory variables, annihilating the otherwise prohibitive computing demands. Numerical examples demonstrate the efficiency and practicality of our algorithm.
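
    The memory-variable substitution can be sketched in one dimension: convolving a field history with an exponential decay exp(-t/tau) collapses into a one-step recursion, so no history needs to be stored. The toy field values below are our own; the paper applies the same identity per Debye constituent on a 3-D mesh.

```python
import math

# Sketch of the memory-variable idea: a discrete convolution of a field
# history with exp(-t/tau) equals an O(1)-per-step recursive update.
def direct_convolution(field, dt, tau, n):
    """O(n) discrete convolution sum up to time step n (stores full history)."""
    return sum(field[k] * math.exp(-(n - k) * dt / tau) * dt for k in range(n + 1))

def run_recursive(field, dt, tau):
    """Same quantity via a memory variable psi, updated in O(1) per step."""
    decay = math.exp(-dt / tau)
    psi, out = 0.0, []
    for e in field:
        psi = decay * psi + e * dt
        out.append(psi)
    return out

field = [1.0, 0.5, -0.2, 0.8, 0.0, 0.3]   # toy 1-D field history
dt, tau = 0.1, 0.4
rec = run_recursive(field, dt, tau)
direct = [direct_convolution(field, dt, tau, n) for n in range(len(field))]
ok = all(abs(a - b) < 1e-12 for a, b in zip(rec, direct))
print(ok)
```

    The two computations agree to machine precision, which is why replacing the convolution by memory variables "annihilates" the otherwise prohibitive storage and compute costs.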

  13. Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image.

    PubMed

    Xu, Kele; Feng, Dawei; Mi, Haibo

    2017-11-23

    The automatic detection of diabetic retinopathy is of vital importance, as it is the main cause of irreversible vision loss in the working-age population in the developed world. The early detection of diabetic retinopathy occurrence can be very helpful for clinical treatment; although several different feature extraction approaches have been proposed, the classification task for retinal images is still tedious even for trained clinicians. Recently, deep convolutional neural networks have manifested superior performance in image classification compared to previous handcrafted feature-based image classification methods. Thus, in this paper, we explored the use of deep convolutional neural network methodology for the automatic classification of diabetic retinopathy using color fundus images, and obtained an accuracy of 94.5% on our dataset, outperforming the results obtained by using classical approaches.

  14. Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network.

    PubMed

    Kang, Eunhee; Chang, Won; Yoo, Jaejun; Ye, Jong Chul

    2018-06-01

    Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using a wavelet residual network which synergistically combines the expressive power of deep learning and the performance guarantee of framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of the deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detailed texture of the original images.

  15. Attachment of Free Filament Thermocouples for Temperature Measurements on Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Lei, Jih-Fen; Cuy, Michael D.; Wnuk, Stephen P.

    1998-01-01

    At the NASA Lewis Research Center, a new installation technique utilizing convoluted wire thermocouples (TC's) was developed and proven to produce very good adhesion on CMC's, even in a burner rig environment. Because of their unique convoluted design, such TC's of various types and sizes adhere to flat or curved CMC specimens with no sign of delamination, open circuits, or interactions-even after testing in a Mach 0.3 burner rig to 1200 C (2200 F) for several thermal cycles and at several hours at high temperatures. Large differences in thermal expansion between metal thermocouples and low-expansion materials, such as CMC's, normally generate large stresses in the wires. These stresses cause straight wires to detach, but convoluted wires that are bonded with strips of coating allow bending in the unbonded portion to relieve these expansion stresses.

  16. Time Reversal Methods for Structural Health Monitoring of Metallic Structures Using Guided Waves

    DTIC Science & Technology

    2011-09-01

    ...measure elastic properties of thin isotropic materials and laminated composite plates. Two types of waves propagate: a symmetric wave and an antisymmetric wave. ...compare it to the original signal. In this time-reversal procedure, wave propagation from point A to point B can be modeled as a convolution, where * is the convolution operator and the transducer transmit and receive transfer functions are neglected for simplification. In the frequency...

  17. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike the existing methods which consider feature extraction and metric learning as two independent stages, we adopt a deep-learning based framework which combines the two stages together and can be trained end-to-end. The experimental results on two offline public databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.

  18. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    DTIC Science & Technology

    2014-11-17

    Models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large... A limitation of simple RNN models which strictly integrate state information over time is known as the "vanishing gradient" effect: the ability to...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golfinopoulos, A.; Soupioni, M.; Kanellaki, M.

    The effect of initial lactose concentration on the lactose uptake rate by kefir free cells during lactose fermentation was studied in this work. 14C-labelled lactose was used for the investigation, since labelled and unlabelled molecules are fermented in the same way. The results show that lactose uptake rates are up to two-fold higher at lower initial °Bé densities than at higher initial °Bé densities.

  20. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
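
    A separable recursive exponential filter of the kind such an accelerator evaluates can be sketched as a causal plus anticausal first-order pass per axis, approximating convolution with exp(-|x|/tau) at constant cost per pixel. The smoothing coefficient, normalization convention and toy image below are our illustrative choices, not the HWA's design.

```python
# Sketch of a separable recursive exponential filter: two first-order passes
# (causal and anticausal) per axis approximate an exponential convolution
# kernel with O(1) work per pixel. Conventions and numbers are illustrative.
def exp_filter_1d(row, alpha):
    causal, acc = [], 0.0
    for v in row:                       # forward (causal) pass
        acc = alpha * v + (1 - alpha) * acc
        causal.append(acc)
    anticausal, acc = [0.0] * len(row), 0.0
    for i in range(len(row) - 1, -1, -1):   # backward (anticausal) pass
        acc = alpha * row[i] + (1 - alpha) * acc
        anticausal[i] = acc
    # average the two passes (one of several possible normalizations)
    return [(c + a) / 2 for c, a in zip(causal, anticausal)]

def exp_filter_2d(img, alpha):
    rows = [exp_filter_1d(r, alpha) for r in img]          # filter rows
    cols = [exp_filter_1d(list(c), alpha) for c in zip(*rows)]  # then columns
    return [list(r) for r in zip(*cols)]                   # transpose back

img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # single bright pixel
out = exp_filter_2d(img, alpha=0.5)
print(out)
```

    Because the filter is separable and recursive, a hardware pipeline only needs one accumulator per scan direction, which is what makes the exponential kernel attractive for a streaming VLSI implementation.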
