NASA Astrophysics Data System (ADS)
Mercan, Kadir; Demir, Çiğdem; Civalek, Ömer
2016-01-01
In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power law index. Frequency values are calculated for different types of boundary conditions and material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.
On the continuity of the stationary state distribution of DPCM
NASA Astrophysics Data System (ADS)
Naraghi-Pour, Morteza; Neuhoff, David L.
1990-03-01
Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.
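The state recursion behind this analysis can be sketched in a few lines. The following toy simulation assumes the delta-modulation state update s ← a·s + b with b ∈ {−1, +1} (step size normalized to 1), which is exactly the iteration whose stationary law is a symmetric Bernoulli convolution; it only checks the elementary support bound, not singularity itself.

```python
import random

def delta_mod_states(a, n_steps, seed=0):
    """Iterate the delta-modulation state recursion s <- a*s + b, b in {-1, +1}.
    For |a| < 1 the iterates converge in distribution to the symmetric
    Bernoulli convolution with parameter a, supported in [-1/(1-a), 1/(1-a)]."""
    rng = random.Random(seed)
    s, out = 0.0, []
    for _ in range(n_steps):
        s = a * s + rng.choice((-1.0, 1.0))
        out.append(s)
    return out

a = 0.4                       # prediction coefficient below 1/2: singular regime
states = delta_mod_states(a, 10000)
bound = 1.0 / (1.0 - a)
print(all(abs(s) <= bound + 1e-12 for s in states))
```

For a < 1/2 the iterates concentrate on a Cantor-like set inside this interval, which is the geometric reason the stationary distribution has no density.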
A separable two-dimensional discrete Hartley transform
NASA Technical Reports Server (NTRS)
Watson, A. B.; Poirson, A.
1985-01-01
Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions, and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. It is also argued that the DHT is unlikely to provide faster convolution than the DFT.
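A minimal sketch of the one-dimensional DHT may help fix notation: H[k] = Σ x[n]·cas(2πkn/N) with cas(t) = cos(t) + sin(t). For real input it equals Re(DFT) − Im(DFT), and it is its own inverse up to a factor 1/N; the separability problem discussed above arises only in the 2D extension, which is not reproduced here.

```python
import cmath, math

def dht(x):
    """Naive discrete Hartley transform: H[k] = sum_n x[n]*cas(2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N)
                        + math.sin(2 * math.pi * k * n / N))
                for n in range(N)) for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
N = len(x)
H = dht(x)
# For real input, DHT = Re(DFT) - Im(DFT) ...
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]
print(all(abs(H[k] - (X[k].real - X[k].imag)) < 1e-9 for k in range(N)))
# ... and the DHT is involutive up to 1/N.
x_back = [h / N for h in dht(H)]
print(all(abs(a - b) < 1e-9 for a, b in zip(x, x_back)))
```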
Linear diffusion-wave channel routing using a discrete Hayami convolution method
Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin
2014-01-01
The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
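The substitution of a discrete convolution for the continuous integral can be sketched as follows. The hydrograph values and the unit-response kernel below are invented for illustration (the paper's Hayami kernel is not reproduced); the check shows that a unit-volume kernel conserves total flow volume.

```python
def discrete_convolve(inflow, response):
    """Discrete convolution q[n] = sum_m inflow[m] * response[n-m],
    replacing the continuous convolution integral."""
    q = [0.0] * (len(inflow) + len(response) - 1)
    for m, u in enumerate(inflow):
        for j, r in enumerate(response):
            q[m + j] += u * r
    return q

inflow = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0]      # hypothetical input hydrograph
response = [0.1, 0.4, 0.3, 0.15, 0.05]       # hypothetical unit response, sums to 1
outflow = discrete_convolve(inflow, response)
print(abs(sum(outflow) - sum(inflow)) < 1e-12)
```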
Foltz, T M; Welsh, B M
1999-01-01
This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
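The diagonalization fact that the derivation starts from is easy to verify numerically: the eigenvalues of a circulant matrix are the DFT of its first column, so a circulant matrix-vector product is a pointwise multiplication in the frequency domain. This sketch checks only that classical property, not the symmetric-convolution extension developed in the paper.

```python
import numpy as np

# Circulant matrix with first column c: C[i, j] = c[(i - j) mod n].
c = np.array([4.0, 1.0, -2.0, 3.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

x = np.array([1.0, 0.5, -1.0, 2.0])
direct = C @ x
# C @ x is the cyclic convolution of c and x: diagonal in the DFT basis.
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print(np.allclose(direct, via_fft))
```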
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Due to their coupling properties, magneto-electro-elastic materials possess a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time-domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras
NASA Astrophysics Data System (ADS)
Angel, Eitan
2010-09-01
In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.
Oscillatory singular integrals and harmonic analysis on nilpotent groups
Ricci, F.; Stein, E. M.
1986-01-01
Several related classes of operators on nilpotent Lie groups are considered. These operators involve the following features: (i) oscillatory factors that are exponentials of imaginary polynomials, (ii) convolutions with singular kernels supported on lower-dimensional submanifolds, (iii) validity in the general context not requiring the existence of dilations that are automorphisms. PMID:16593640
Singular perturbation and time scale approaches in discrete control systems
NASA Technical Reports Server (NTRS)
Naidu, D. S.; Price, D. B.
1988-01-01
After considering a singularly perturbed discrete control system, a singular perturbation approach is used to obtain outer and correction subsystems. A time scale approach is then applied via block diagonalization transformations to decouple the system into slow and fast subsystems. To a zeroth-order approximation, the singular perturbation and time-scale approaches are found to yield equivalent results.
Properties of the Magnitude Terms of Orthogonal Scaling Functions.
Tay, Peter C; Havlicek, Joseph P; Acton, Scott T; Hossack, John A
2010-09-01
The spectrum of the convolution of two continuous functions can be determined as the continuous Fourier transform of the cross-correlation function. The same can be said about the spectrum of the convolution of two infinite discrete sequences, which can be determined as the discrete-time Fourier transform of the cross-correlation function of the two sequences. In current digital signal processing, the spectra of the continuous Fourier transform and the discrete-time Fourier transform are approximately determined by numerical integration or by densely taking the discrete Fourier transform. It has been shown that all three transforms share many analogous properties. In this paper we show another useful property: the spectrum terms of the convolution of two finite-length sequences can be determined from the discrete Fourier transform of the modified cross-correlation function. In addition, two properties of the magnitude terms of orthogonal wavelet scaling functions are developed. These properties are used as constraints for an exhaustive search to determine a robust lower bound on the conjoint localization of orthogonal scaling functions.
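The classical discrete analogue of the correlation/spectrum relationship can be checked directly: the DFT of a sequence's circular autocorrelation equals its power spectrum |X|². This is the standard Wiener-Khinchin-type identity underlying the discussion; the paper's "modified cross-correlation" variant for finite-length linear convolution is analogous but not reproduced here.

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5, -1.0])
n = len(x)
# Circular autocorrelation r[k] = sum_m x[m] * x[(m + k) mod n].
r = np.array([sum(x[m] * x[(m + k) % n] for m in range(n)) for k in range(n)])
lhs = np.fft.fft(r)                  # DFT of the autocorrelation ...
rhs = np.abs(np.fft.fft(x)) ** 2     # ... equals the power spectrum.
print(np.allclose(lhs, rhs))
```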
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution, defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
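The circulant-embedding step mentioned above is a standard trick worth making concrete: an n×n Toeplitz matrix is embedded into a 2n×2n circulant, whose matvec is a pointwise product in the DFT domain. This is a generic sketch of the technique, not the paper's Conv-Gauss-FFT implementation.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply a Toeplitz matrix (first column `col`, first row `row`,
    with col[0] == row[0]) by x via circulant embedding + FFT."""
    n = len(x)
    # First column of the embedding circulant: [col, 0, row[n-1], ..., row[1]].
    c = np.concatenate([col, [0.0], row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real
    return y[:n]

col = np.array([2.0, 1.0, 0.5])
row = np.array([2.0, -1.0, 0.25])
T = np.array([[2.0, -1.0, 0.25],
              [1.0,  2.0, -1.0],
              [0.5,  1.0,  2.0]])       # the Toeplitz matrix spelled out
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))
```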
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
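The core numerical idea can be sketched with a single fixed p: replacing the max by a p-norm turns max-convolution into an ordinary convolution of the elementwise p-th powers (which FFTs can evaluate fast). This is a simplified illustration of the spirit of the method, not the paper's full scheme with its error control.

```python
import numpy as np

def max_convolve_exact(x, y):
    """Naive O(k^2) max-convolution: z[n] = max_m x[m] * y[n-m]."""
    z = np.zeros(len(x) + len(y) - 1)
    for n in range(len(z)):
        lo, hi = max(0, n - len(y) + 1), min(n, len(x) - 1)
        z[n] = max(x[m] * y[n - m] for m in range(lo, hi + 1))
    return z

def max_convolve_pnorm(x, y, p=64):
    """Approximate max-convolution: the p-norm of the terms is an ordinary
    convolution of x**p and y**p, followed by a p-th root."""
    return np.convolve(x ** p, y ** p) ** (1.0 / p)

x = np.array([0.1, 0.7, 0.2])       # toy probability-mass-like vectors
y = np.array([0.3, 0.4, 0.3])
exact = max_convolve_exact(x, y)
approx = max_convolve_pnorm(x, y)
# The p-norm overestimates the max by at most a factor k**(1/p).
print(np.allclose(exact, approx, rtol=0.05))
```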
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations, but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
Fronts in extended systems of bistable maps coupled via convolutions
NASA Astrophysics Data System (ADS)
Coutinho, Ricardo; Fernandez, Bastien
2004-01-01
An analysis of front dynamics in discrete time and spatially extended systems with general bistable nonlinearity is presented. The spatial coupling is given by the convolution with distribution functions. It allows us to treat in a unified way discrete, continuous or partly discrete and partly continuous diffusive interactions. We prove the existence of fronts and the uniqueness of their velocity. We also prove that the front velocity depends continuously on the parameters of the system. Finally, we show that every initial configuration that is an interface between the stable phases propagates asymptotically with the front velocity.
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q^2) to yield a new algorithm for computing the discrete cyclic convolution of sequences of complex numbers. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
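The appeal of integer transforms is that the convolution theorem holds exactly, with no floating-point roundoff. The sketch below illustrates this over a small prime field GF(p) (a number-theoretic transform with p = 17, length 4), rather than the paper's GF(q^2) construction or the Winograd factorization.

```python
def ntt(a, root, p):
    """Naive number-theoretic transform over GF(p):
    A[k] = sum_n a[n] * root**(n*k) mod p."""
    N = len(a)
    return [sum(a[n] * pow(root, n * k, p) for n in range(N)) % p
            for k in range(N)]

def cyclic_convolve_ntt(a, b, root, p):
    """Exact cyclic convolution mod p via transform-multiply-invert."""
    N = len(a)
    C = [(x * y) % p for x, y in zip(ntt(a, root, p), ntt(b, root, p))]
    inv_root = pow(root, p - 2, p)        # Fermat inverse of the root
    inv_N = pow(N, p - 2, p)              # Fermat inverse of N
    return [(inv_N * c) % p for c in ntt(C, inv_root, p)]

# p = 17, N = 4: 4 is a primitive 4th root of unity mod 17 (4**2 = 16 = -1).
a, b = [1, 2, 3, 4], [5, 6, 7, 8]
direct = [sum(a[m] * b[(n - m) % 4] for m in range(4)) % 17 for n in range(4)]
print(cyclic_convolve_ntt(a, b, 4, 17) == direct)
```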
Integrable mappings and the notion of anticonfinement
NASA Astrophysics Data System (ADS)
Mase, T.; Willox, R.; Ramani, A.; Grammaticos, B.
2018-06-01
We examine the notion of anticonfinement and the role it has to play in the singularity analysis of discrete systems. A singularity is said to be anticonfined if singular values continue to arise indefinitely for the forward and backward iterations of a mapping, with only a finite number of iterates taking regular values in between. We show through several concrete examples that the behaviour of some anticonfined singularities is strongly related to the integrability properties of the discrete mappings in which they arise, and we explain how to use this information to decide on the integrability or non-integrability of the mapping.
NASA Technical Reports Server (NTRS)
Maskew, B.
1976-01-01
A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
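The superposition idea behind the DTC method can be illustrated on a much simpler system than the diffusion equation: a first-order relaxation ODE x' = -x/τ + u(t), whose analytical step response is known. Summing step responses switched on and off at the sample instants reproduces the exact solution for an arbitrary piecewise-constant input; all values below are hypothetical stand-ins, not the battery model of the paper.

```python
import math

def step_response(t, tau):
    """Analytical response of x' = -x/tau + u to a unit step at t = 0."""
    return tau * (1.0 - math.exp(-t / tau)) if t > 0 else 0.0

def dtc_solve(u, dt, tau):
    """Discrete temporal convolution: superpose analytical step responses
    switched on/off at the sample instants, so the input need not be
    known a priori."""
    out = []
    for i in range(1, len(u) + 1):
        t = i * dt
        out.append(sum(u[k] * (step_response(t - k * dt, tau)
                               - step_response(t - (k + 1) * dt, tau))
                       for k in range(i)))
    return out

tau, dt = 0.5, 0.1
u = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0]     # arbitrary piecewise-constant input
x_dtc = dtc_solve(u, dt, tau)
# Reference: exact per-step update for a piecewise-constant input.
x, x_ref = 0.0, []
for uk in u:
    x = x * math.exp(-dt / tau) + uk * tau * (1.0 - math.exp(-dt / tau))
    x_ref.append(x)
print(all(abs(a - b) < 1e-12 for a, b in zip(x_dtc, x_ref)))
```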
Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L
1997-04-25
A statistical approach to analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using a L1-metric in the space of distribution functions for minimisation with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
FDTD modelling of induced polarization phenomena in transient electromagnetics
NASA Astrophysics Data System (ADS)
Commer, Michael; Petrov, Peter V.; Newman, Gregory A.
2017-04-01
The finite-difference time-domain scheme is augmented in order to treat the modelling of transient electromagnetic signals containing induced polarization effects from 3-D distributions of polarizable media. Compared to the non-dispersive problem, the discrete dispersive Maxwell system contains costly convolution operators. Key components to our solution for highly digitized model meshes are Debye decomposition and composite memory variables. We revert to the popular Cole-Cole model of dispersion to describe the frequency-dependent behaviour of electrical conductivity. Its inversely Laplace-transformed Debye decomposition results in a series of time convolutions between electric field and exponential decay functions, with the latter reflecting each Debye constituents' individual relaxation time. These function types in the discrete-time convolution allow for their substitution by memory variables, annihilating the otherwise prohibitive computing demands. Numerical examples demonstrate the efficiency and practicality of our algorithm.
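The memory-variable substitution rests on a simple identity: a discrete-time convolution with an exponential kernel obeys a one-step recursion, so the full field history never needs to be stored. The sketch below verifies this for a single exponential term with invented parameters, which is the mechanism applied to each Debye constituent in the scheme above.

```python
import math

dt, tau = 0.05, 0.3
e = [math.sin(0.4 * n) for n in range(200)]      # stand-in field history

# Direct O(N^2) discrete convolution with the exponential decay kernel:
# y[n] = sum_{m<=n} exp(-(n-m)*dt/tau) * e[m] * dt.
direct = [sum(math.exp(-(n - m) * dt / tau) * e[m] * dt for m in range(n + 1))
          for n in range(len(e))]

# Equivalent O(N) memory-variable recursion: y[n] = decay*y[n-1] + e[n]*dt.
y, rec = 0.0, []
decay = math.exp(-dt / tau)
for en in e:
    y = decay * y + en * dt
    rec.append(y)
print(max(abs(a - b) for a, b in zip(direct, rec)) < 1e-9)
```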
Elementary exact calculations of degree growth and entropy for discrete equations.
Halburd, R G
2017-05-01
Second-order discrete equations are studied over a field of rational functions in z, where z is a variable not appearing in the equation. The exact degree of each iterate as a function of z can be calculated easily using the standard calculations that arise in singularity confinement analysis, even when the singularities are not confined. This produces elementary yet rigorous entropy calculations.
Small-kernel, constrained least-squares restoration of sampled image data
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Park, Stephen K.
1992-01-01
Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
Singularity-free dislocation dynamics with strain gradient elasticity
NASA Astrophysics Data System (ADS)
Po, Giacomo; Lazar, Markus; Seif, Dariush; Ghoniem, Nasr
2014-08-01
The singular nature of the elastic fields produced by dislocations presents conceptual challenges and computational difficulties in the implementation of discrete dislocation-based models of plasticity. In the context of classical elasticity, attempts to regularize the elastic fields of discrete dislocations encounter intrinsic difficulties. On the other hand, in gradient elasticity, the issue of singularity can be removed at the outset and smooth elastic fields of dislocations are available. In this work we consider theoretical and numerical aspects of the non-singular theory of discrete dislocation loops in gradient elasticity of Helmholtz type, with interest in its applications to three dimensional dislocation dynamics (DD) simulations. The gradient solution is developed and compared to its singular and non-singular counterparts in classical elasticity using the unified framework of eigenstrain theory. The fundamental equations of curved dislocation theory are given as non-singular line integrals suitable for numerical implementation using fast one-dimensional quadrature. These include expressions for the interaction energy between two dislocation loops and the line integral form of the generalized solid angle associated with dislocations having a spread core. The single characteristic length scale of Helmholtz elasticity is determined from independent molecular statics (MS) calculations. The gradient solution is implemented numerically within our variational formulation of DD, with several examples illustrating the viability of the non-singular solution. The displacement field around a dislocation loop is shown to be smooth, and the loop self-energy non-divergent, as expected from atomic configurations of crystalline materials. The loop nucleation energy barrier and its dependence on the applied shear stress are computed and shown to be in good agreement with atomistic calculations. 
DD simulations of Lomer-Cottrell junctions in Al show that the strength of the junction and its configuration are easily obtained, without ad hoc regularization of the singular fields. Numerical convergence studies related to the implementation of the non-singular theory in DD are presented.
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
NASA Technical Reports Server (NTRS)
Nixon, Douglas D.
2009-01-01
Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
Dynamical quantum phase transitions in discrete time crystals
NASA Astrophysics Data System (ADS)
Kosior, Arkadiusz; Sacha, Krzysztof
2018-05-01
Discrete time crystals are related to nonequilibrium dynamics of periodically driven quantum many-body systems where the discrete time-translation symmetry of the Hamiltonian is spontaneously broken into another discrete symmetry. Recently, the concept of phase transitions has been extended to nonequilibrium dynamics of time-independent systems induced by a quantum quench, i.e., a sudden change of some parameter of the Hamiltonian. There, the return probability of a system to the ground state reveals singularities in time which are dubbed dynamical quantum phase transitions. We show that the quantum quench in a discrete time crystal leads to dynamical quantum phase transitions where the return probability of a periodically driven system to a Floquet eigenstate before the quench reveals singularities in time. It indicates that dynamical quantum phase transitions are not restricted to time-independent systems and can be also observed in systems that are periodically driven. We discuss how the phenomenon can be observed in ultracold atomic gases.
Contracting singular horseshoe
NASA Astrophysics Data System (ADS)
Morales, C. A.; San Martín, B.
2017-11-01
We suggest a notion of hyperbolicity adapted to the geometric Rovella attractor (Robinson 2012 An Introduction to Dynamical Systems—Continuous and Discrete (Pure and Applied Undergraduate Texts vol 19) 2nd edn (Providence, RI: American Mathematical Society)). More precisely, we call a partially hyperbolic set asymptotically sectional-hyperbolic if its singularities are hyperbolic and if its central subbundle is asymptotically sectional expanding outside the stable manifolds of the singularities. We prove that there are highly chaotic flows with Rovella-like singularities exhibiting this kind of hyperbolicity. We shall call them contracting singular horseshoes.
Generation of fractional acoustic vortex with a discrete Archimedean spiral structure plate
NASA Astrophysics Data System (ADS)
Jia, Yu-Rou; Wei, Qi; Wu, Da-Jian; Xu, Zheng; Liu, Xiao-Jun
2018-04-01
Artificial structure plates engraved with discrete Archimedean spiral slits have been well designed to achieve fractional acoustic vortices (FAVs). The phase and pressure field distributions of FAVs are investigated theoretically and demonstrated numerically. It is found that the phase singularities relating to the integer and fractional parts of the topological charge (TC) result in dark spots in the upper half of the pressure field profile and a low-intensity stripe in the lower half of the pressure field profile, respectively. The dynamic process of the FAV is also discussed in detail as TC increases from 1 to 2. With TC increasing from 1 to 1.5, the splitting of the phase singularity leads to a deviation of the phase of the FAV from the integer case, and a new phase singularity occurs. As TC increases from 1.5 to 2, the two phase singularities of the FAV approach each other and finally merge into a new central phase singularity. We further perform an experiment based on the Schlieren method to demonstrate the generation of the FAV.
Non-free gas of dipoles of non-singular screw dislocations and the shear modulus near the melting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Cyril, E-mail: malyshev@pdmi.ras.ru
2014-12-15
The behavior of the shear modulus caused by proliferation of dipoles of non-singular screw dislocations with finite-sized core is considered. The representation of a two-dimensional Coulomb gas with smoothed-out coupling is used, and the stress-stress correlation function is calculated. A convolution integral expressed in terms of the modified Bessel function K_0 is derived in order to obtain the shear modulus in the approximation of interacting dipoles. Implications are demonstrated for the shear modulus near the melting transition which are due to the singularityless character of the dislocations. Highlights: • Thermodynamics of dipoles of non-singular screw dislocations is studied below the melting. • The renormalization of the shear modulus is obtained for interacting dipoles. • Dependence of the shear modulus on the system scales is presented near the melting.
Three-Class Mammogram Classification Based on Descriptive CNN Features
Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel
2017-01-01
In this paper, a novel classification technique for large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed as its four subbands by means of two-dimensional discrete wavelet transform (2D-DWT), while in the second method discrete curvelet transform (DCT) is used. In both methods, dense scale invariant feature (DSIFT) for all subbands is extracted. Input data matrix containing these subband features of all the mammogram patches is created that is processed as input to convolutional neural network (CNN). Softmax layer and support vector machine (SVM) layer are used to train CNN for classification. Proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rate of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force remains the dominant computational cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space charge effect in accelerators.
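The baseline that the paper optimizes — zero-padded convolution of a charge density with a free-space Green's function — can be sketched in 1D with the FFT. Without padding, the FFT computes a circular convolution; padding to at least the full linear-convolution length removes the wrap-around. The `rho` and `green` arrays below are toy stand-ins, not the paper's integrated Green's function.

```python
import numpy as np

def linear_conv_fft(f, g):
    """Linear (aperiodic) convolution via FFT with explicit zero-padding.

    Padding both sequences to length len(f) + len(g) - 1 eliminates the
    periodic wrap-around of the plain FFT product, which is what a
    free-space Green's function convolution requires.
    """
    n = len(f) + len(g) - 1
    F = np.fft.rfft(f, n)
    G = np.fft.rfft(g, n)
    return np.fft.irfft(F * G, n)

rho = np.array([0.0, 1.0, 2.0, 1.0, 0.0])      # toy charge density
green = np.array([0.25, 0.5, 1.0, 0.5, 0.25])  # toy discretized Green's function
phi = linear_conv_fft(rho, green)               # toy potential
```

The paper's contribution is a faster convolution routine than this explicit zero-padded version, while producing the same result.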
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization, is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, at a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
Modeling of Graphene Planar Grating in the THz Range by the Method of Singular Integral Equations
NASA Astrophysics Data System (ADS)
Kaliberda, Mstislav E.; Lytvynenko, Leonid M.; Pogarsky, Sergey A.
2018-04-01
Diffraction of the H-polarized electromagnetic wave by the planar graphene grating in the THz range is considered. The scattering and absorption characteristics are studied. The scattered field is represented in the spectral domain via unknown spectral function. The mathematical model is based on the graphene surface impedance and the method of singular integral equations. The numerical solution is obtained by the Nystrom-type method of discrete singularities.
Integrable discrete PT symmetric model.
Ablowitz, Mark J; Musslimani, Ziad H
2014-09-01
An exactly solvable discrete PT invariant nonlinear Schrödinger-like model is introduced. It is an integrable Hamiltonian system that exhibits a nontrivial nonlinear PT symmetry. A discrete one-soliton solution is constructed using a left-right Riemann-Hilbert formulation. It is shown that this pure soliton exhibits unique features such as power oscillations and singularity formation. The proposed model can be viewed as a discretization of a recently obtained integrable nonlocal nonlinear Schrödinger equation.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
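The partition-and-SVD step can be sketched as follows: split a mode matrix into submatrices and keep the leading singular values of each as a local feature vector. The VMD decomposition and CNN classifier are not reproduced here, and the `modes` matrix is random placeholder data rather than real vibration modes.

```python
import numpy as np

def svd_feature_matrix(mode_matrix, n_parts, k):
    """Split the mode matrix column-wise into n_parts submatrices and
    keep the k largest singular values of each as a feature row."""
    feats = []
    for sub in np.array_split(mode_matrix, n_parts, axis=1):
        s = np.linalg.svd(sub, compute_uv=False)  # descending singular values
        feats.append(s[:k])
    return np.vstack(feats)  # shape: (n_parts, k)

rng = np.random.default_rng(1)
modes = rng.normal(size=(4, 1024))   # 4 mode components x 1024 samples (synthetic)
F = svd_feature_matrix(modes, n_parts=8, k=4)
```

In the paper, such feature matrices for each fault state become the CNN's input images.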
Coulomb branches with complex singularities
NASA Astrophysics Data System (ADS)
Argyres, Philip C.; Martone, Mario
2018-06-01
We construct 4d superconformal field theories (SCFTs) whose Coulomb branches have singular complex structures. This implies, in particular, that their Coulomb branch coordinate rings are not freely generated. Our construction also gives examples of distinct SCFTs which have identical moduli space (Coulomb, Higgs, and mixed branch) geometries. These SCFTs thus provide an interesting arena in which to test the relationship between moduli space geometries and conformal field theory data. We construct these SCFTs by gauging certain discrete global symmetries of N = 4 super Yang-Mills (sYM) theories. In the simplest cases, these discrete symmetries are outer automorphisms of the sYM gauge group, and so these theories have Lagrangian descriptions as N = 4 sYM theories with disconnected gauge groups.
Variational Integration for Ideal Magnetohydrodynamics and Formation of Current Singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yao
Coronal heating has been a long-standing conundrum in solar physics. Parker's conjecture that spontaneous current singularities lead to nanoflares that heat the corona has been controversial. In ideal magnetohydrodynamics (MHD), can genuine current singularities emerge from a smooth 3D line-tied magnetic field? To numerically resolve this issue, the schemes employed must preserve magnetic topology exactly to avoid artificial reconnection in the presence of (nearly) singular current densities. Structure-preserving numerical methods are favorable for mitigating numerical dissipation, and variational integration is a powerful machinery for deriving them. However, successful applications of variational integration to ideal MHD have been scarce. In this thesis, we develop variational integrators for ideal MHD in Lagrangian labeling by discretizing Newcomb's Lagrangian on a moving mesh using discrete exterior calculus. With the built-in frozen-in equation, the schemes are free of artificial reconnection, hence optimal for studying current singularity formation. Using this method, we first study a fundamental prototype problem in 2D, the Hahm-Kulsrud-Taylor (HKT) problem. It considers the effect of boundary perturbations on a 2D plasma magnetized by a sheared field, and its linear solution is singular. We find that with increasing resolution, the nonlinear solution converges to one with a current singularity. The same signature of current singularity is also identified in other 2D cases with more complex magnetic topologies, such as the coalescence instability of magnetic islands. We then extend the HKT problem to 3D line-tied geometry, which models the solar corona by anchoring the field lines in the boundaries. The effect of such geometry is crucial in the controversy over Parker's conjecture. The linear solution, which is singular in 2D, is found to be smooth. However, with finite amplitude, it can become pathological above a critical system length.
The nonlinear solution turns out to be smooth for short systems. Nonetheless, the scaling of peak current density vs. system length suggests that the nonlinear solution may become singular at a finite length. With the results in hand, we cannot confirm or rule out this possibility conclusively, since we cannot obtain solutions with system lengths near the extrapolated critical value.
Discrete mathematical model of wave diffraction on pre-fractal impedance strips. TM mode case
NASA Astrophysics Data System (ADS)
Nesvit, K. V.
2013-10-01
In this paper a transverse magnetic (TM) wave diffraction problem on pre-fractal impedance strips is considered. The overall aim of this work is to develop a discrete mathematical model of the boundary integral equations (IEs) with the help of special quadrature formulas with nodes at the zeros of Chebyshev polynomials, and to perform numerical experiments with the help of an efficient discrete singularities method (DSM).
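The classical ingredient behind such quadrature formulas is the Gauss-Chebyshev rule, whose nodes are the zeros of the Chebyshev polynomial T_n. As a minimal illustration (the paper's DSM discretization of the singular integral equations is considerably more involved), the rule integrates f(x)/sqrt(1-x^2) over [-1, 1] exactly for polynomials up to degree 2n-1.

```python
import numpy as np

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev quadrature: nodes at the zeros of T_n(x),
    approximating the integral of f(x)/sqrt(1 - x^2) over [-1, 1]."""
    k = np.arange(1, n + 1)
    x = np.cos((2*k - 1) * np.pi / (2*n))  # zeros of the Chebyshev polynomial T_n
    return (np.pi / n) * np.sum(f(x))

I1 = gauss_chebyshev(lambda x: np.ones_like(x), 8)  # integral of 1/sqrt(1-x^2) = pi
I2 = gauss_chebyshev(lambda x: x**2, 8)             # equals pi/2
```

The equal weights pi/n are what make these Chebyshev-zero rules so convenient for discretizing integral operators with Cauchy-type singularities.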
Real-Time and Memory Correlation via Acousto-Optic Processing,
1978-06-01
acousto-optic technology as an answer to these requirements appears very attractive. Three fundamental signal-processing schemes using the acousto-optic interaction have been investigated: (i) real-time correlation and convolution, (ii) Fourier and discrete Fourier transformation, and (iii
Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr
2013-02-15
The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to realize numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.
Theory and operational rules for the discrete Hankel transform.
Baddour, Natalie; Chouinard, Ugo
2015-04-01
Previous definitions of a discrete Hankel transform (DHT) have focused on methods to approximate the continuous Hankel integral transform. In this paper, we propose and evaluate the theory of a DHT that is shown to arise from a discretization scheme based on the theory of Fourier-Bessel expansions. The proposed transform also possesses the requisite orthogonality properties, which lead to invertibility of the transform. The standard set of shift, modulation, multiplication, and convolution rules is derived. In addition to the theory of the actual manipulated quantities, which stands in its own right, this DHT can be used to approximate the continuous forward and inverse Hankel transform in the same manner that the discrete Fourier transform is known to approximate the continuous Fourier transform.
A non-orthogonal decomposition of flows into discrete events
NASA Astrophysics Data System (ADS)
Boxx, Isaac; Lewalle, Jacques
1998-11-01
This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap and their implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.
The Singularity Mystery Associated with a Radially Continuous Maxwell Viscoelastic Structure
NASA Technical Reports Server (NTRS)
Fang, Ming; Hager, Bradford H.
1995-01-01
The singularity problem associated with a radially continuous Maxwell viscoelastic structure is investigated. A special tool called the isolation function is developed. Results calculated using the isolation function show that the discrete model assumption is no longer valid when the viscoelastic parameter becomes a continuous function of radius. Continuous variations in the upper mantle viscoelastic parameter are especially powerful in destroying the mode-like structures. The contribution of the singularities to the load Love numbers is sensitive to the convexity of the viscoelastic parameter models. The difference between the vertical response and the horizontal response found in layered viscoelastic parameter models remains with continuous models.
NASA Astrophysics Data System (ADS)
Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang
2017-07-01
Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc, in the framework of discrete signal processing theory, to reconstruct the basis function and to convolve it with the positive half of the waveform. Finally, a superposition procedure is used to account for the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy with a shorter computing time.
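The superposition step rests on the linearity of convolution: the response to a bipolar repeated waveform equals the sum of shifted, sign-alternating copies of the single-half-cycle convolution. A schematic sketch of that step follows; the paper's hamlogsinc reconstruction of a log-sampled basis response is not reproduced, and the `basis` and `half` arrays are toy data.

```python
import numpy as np

def bipolar_response(basis, half, n_half_cycles, period):
    """Response to a bipolar repeated waveform by superposing shifted,
    sign-alternating copies of the single half-cycle convolution."""
    single = np.convolve(basis, half)          # response to one positive half
    n_out = len(single) + (n_half_cycles - 1) * period
    out = np.zeros(n_out)
    for i in range(n_half_cycles):
        out[i*period : i*period + len(single)] += (-1)**i * single
    return out

basis = np.exp(-np.arange(60) / 10.0)   # toy step-off basis response
half = np.ones(15)                      # toy half-cycle current waveform
resp = bipolar_response(basis, half, n_half_cycles=4, period=25)
```

By linearity, this equals one direct convolution of the basis response with the full bipolar waveform, which is what the superposition procedure exploits.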
Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems
NASA Astrophysics Data System (ADS)
Leuschner, Matthias; Fritzen, Felix
2017-11-01
Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
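The core operation of such Fourier-based schemes, the discrete periodic convolution computed via the FFT, can be sketched in 1D. This is illustrative only; FANS applies the idea with tensor-valued fundamental solutions on 2D/3D voxel grids.

```python
import numpy as np

def periodic_conv(a, b):
    """Discrete periodic (circular) convolution via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), len(a))

def periodic_conv_direct(a, b):
    """Direct O(n^2) circular convolution, for verification."""
    n = len(a)
    return np.array([sum(a[m] * b[(k - m) % n] for m in range(n))
                     for k in range(n)])

a = np.array([1.0, 2.0, 0.0, -1.0])
c = periodic_conv(a, a)
```

The FFT version costs O(n log n) per application, which is why reducing the *number* of such convolutions, as FANS does, directly reduces runtime.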
The Total Gaussian Class of Quasiprobabilities and its Relation to Squeezed-State Excitations
NASA Technical Reports Server (NTRS)
Wuensche, Alfred
1996-01-01
The class of quasiprobabilities obtainable from the Wigner quasiprobability by convolutions with the general class of Gaussian functions is investigated. It can be described by a three-dimensional, in general complex, vector parameter with the property of additivity when composing convolutions. The diagonal representation of this class of quasiprobabilities is connected with a generalization of the displaced Fock states in the direction of squeezing. The subclass with a real vector parameter is considered in more detail. It is related to the most important kinds of boson operator ordering. The properties of a specific set of discrete excitations of squeezed coherent states are given.
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations is presented that compares the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces.
The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
A pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun
1988-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q_n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q_n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q_n) is presented.
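The idea of a DFT-like transform over a finite field can be illustrated with a number-theoretic transform (NTT) over GF(257), which computes exact cyclic convolutions of integer sequences via the convolution theorem. This is a generic analogue for illustration, not the paper's prime factor algorithm or its pipeline VLSI architecture; 3 is a primitive root mod 257, so omega = 3^16 mod 257 is a primitive 16th root of unity.

```python
# Number-theoretic transform over GF(257): a DFT on a finite field,
# with an exact cyclic-convolution theorem (no floating-point error).
P = 257                  # Fermat prime 2^8 + 1
N = 16                   # transform length, divides P - 1
OMEGA = pow(3, 16, P)    # primitive N-th root of unity mod P

def ntt(x, omega):
    """Naive O(N^2) transform: X[k] = sum_m x[m] * omega^(k*m) mod P."""
    return [sum(x[m] * pow(omega, k * m, P) for m in range(N)) % P
            for k in range(N)]

def cyclic_conv_ntt(a, b):
    """Exact cyclic convolution mod P via forward NTT, pointwise
    product, and inverse NTT."""
    inv_n = pow(N, P - 2, P)          # N^{-1} mod P by Fermat's little theorem
    inv_omega = pow(OMEGA, P - 2, P)  # omega^{-1} mod P
    A, B = ntt(a, OMEGA), ntt(b, OMEGA)
    C = [(x * y) % P for x, y in zip(A, B)]
    return [(inv_n * c) % P for c in ntt(C, inv_omega)]

a = [(3 * i + 1) % 11 for i in range(N)]
b = [(5 * i + 2) % 7 for i in range(N)]
c = cyclic_conv_ntt(a, b)
```

Because all arithmetic is modular, the convolution is exact, which is the property exploited when decoding Reed-Solomon codes.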
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghatak, Ananya, E-mail: gananya04@gmail.com; Mandal, Raka Dona Ray, E-mail: rakad.ray@gmail.com; Mandal, Bhabani Prasad, E-mail: bhabani.mandal@gmail.com
We complexify a 1-d potential V(x) = V_0 cosh^2(μ) (tanh[(x−μd)/d] + tanh(μ))^2, which exhibits bound, reflecting and free states, to study various properties of a non-Hermitian system. This potential turns into a PT-symmetric non-Hermitian potential when one of the parameters (μ, d) becomes imaginary. For the case of μ→iμ, we have an entirely real bound state spectrum. Explicit scattering states are constructed to show reciprocity at certain discrete values of energy even though the potential is not parity symmetric. Coexistence of deep energy minima of transmissivity with the multiple spectral singularities (MSS) is observed. We further show that this potential becomes invisible from the left (or right) at certain discrete energies. The penetrating states in the other case (d→id) are always reciprocal even though it is PT-invariant and no spectral singularity (SS) is present in this case. The presence of MSS and reflectionlessness is also discussed for the free states in the latter case. -- Highlights: •Existence of multiple spectral singularities (MSS) in a PT-symmetric non-Hermitian system is shown. •Reciprocity is restored at discrete positive energies even for a parity non-invariant complex system. •Coexistence of MSS with deep energy minima of transmissivity is obtained. •Possibilities of both unidirectional and bidirectional invisibility are explored for a non-Hermitian system. •Penetrating states are shown to be reciprocal at all energies for the PT-symmetric system.
Burton-Miller-type singular boundary method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Fu, Zhuo-Jia; Chen, Wen; Gu, Yan
2014-08-01
This paper proposes the singular boundary method (SBM) in conjunction with Burton and Miller's formulation for acoustic radiation and scattering. The SBM is a strong-form collocation boundary discretization technique using the singular fundamental solutions, which is mathematically simple, easy-to-program, meshless and introduces the concept of source intensity factors (SIFs) to eliminate the singularities of the fundamental solutions. Therefore, it avoids singular numerical integrals in the boundary element method (BEM) and circumvents the troublesome placement of the fictitious boundary in the method of fundamental solutions (MFS). In the present method, we derive the SIFs of exterior Helmholtz equation by means of the SIFs of exterior Laplace equation owing to the same order of singularities between the Laplace and Helmholtz fundamental solutions. In conjunction with the Burton-Miller formulation, the SBM enhances the quality of the solution, particularly in the vicinity of the corresponding interior eigenfrequencies. Numerical illustrations demonstrate efficiency and accuracy of the present scheme on some benchmark examples under 2D and 3D unbounded domains in comparison with the analytical solutions, the boundary element solutions and Dirichlet-to-Neumann finite element solutions.
Modern CACSD using the Robust-Control Toolbox
NASA Technical Reports Server (NTRS)
Chiang, Richard Y.; Safonov, Michael G.
1989-01-01
The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools like singular values and structured singular values, robust synthesis tools like continuous/discrete H2/H-infinity synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation, etc. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H-infinity loop-shaping and large space structure model reduction.
Sparse image reconstruction for molecular imaging.
Ting, Michael; Raich, Raviv; Hero, Alfred O
2009-06-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when the system matrix H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
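The hard and soft thresholding rules that the paper's hybrid rule generalizes, and the iterative (soft-)thresholding framework in which they are used, can be sketched as follows. This shows the generic ISTA/lasso machinery only; the paper's hybrid rule and its SURE-based hyperparameter selection are not reproduced.

```python
import numpy as np

def soft(x, t):
    """Soft threshold: shrink toward zero by t (the lasso/ISTA rule)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard(x, t):
    """Hard threshold: zero out entries with magnitude below t."""
    return np.where(np.abs(x) > t, x, 0.0)

def ista(H, y, lam, n_iter=200):
    """Iterative soft thresholding for min ||y - Hx||^2 / 2 + lam * ||x||_1.

    Each iteration is a gradient step on the data term followed by the
    soft-thresholding proximal step."""
    L = np.linalg.norm(H, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = soft(x + H.T @ (y - H @ x) / L, lam / L)
    return x

H = np.eye(5)                            # trivial psf matrix for illustration
y = np.array([3.0, -0.2, 1.5, -4.0, 0.1])
x_hat = ista(H, y, lam=0.5)
```

The hybrid rule of the paper interpolates between `hard` and `soft` above; swapping it into the `ista` loop yields the hybrid estimator.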
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then use the set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values in their calculations, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report incorrectly 'high' prediction performance and may cause large errors in practice.
Regularization of the big bang singularity with random perturbations
NASA Astrophysics Data System (ADS)
Belbruno, Edward; Xue, BingKan
2018-03-01
We show how to regularize the big bang singularity in the presence of random perturbations modeled by Brownian motion using stochastic methods. We prove that the physical variables in a contracting universe dominated by a scalar field can be continuously and uniquely extended through the big bang as a function of time to an expanding universe only for a discrete set of values of the equation of state satisfying special co-prime number conditions. This result significantly generalizes a previous result (Xue and Belbruno 2014 Class. Quantum Grav. 31 165002) that did not model random perturbations. It implies that the extension from a contracting to an expanding universe for this discrete set of co-prime equations of state is robust, which is a surprising result. Implications for a purely expanding universe are discussed, such as a non-smooth, randomly varying scale factor near the big bang.
NASA Astrophysics Data System (ADS)
Liu, Zhengguang; Li, Xiaoli
2018-05-01
In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation removes the singular kernel, making the integral calculation more efficient. Furthermore, this definition remains valid when α is a positive integer. The T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
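The time-stepping backbone of such schemes, the Crank-Nicolson method, can be sketched for the classical diffusion equation u_t = D u_xx with zero Dirichlet boundaries. This is a generic sketch of the second-order time discretization only; the paper couples such a step with a discretized T-Caputo term for the fractal mobile/immobile model.

```python
import numpy as np

def crank_nicolson_heat(u0, D, dx, dt, steps):
    """Crank-Nicolson for u_t = D u_xx on interior points (zero Dirichlet BCs).

    Averages the implicit and explicit discretizations, giving second-order
    accuracy in both dx and dt and unconditional stability."""
    n = len(u0)
    r = D * dt / (2.0 * dx**2)
    A = (1 + 2*r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    B = (1 - 2*r) * np.eye(n) + r * (np.eye(n, k=1) + np.eye(n, k=-1))
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)   # (I + rL) u_new = (I - rL) u_old
    return u

n = 49
dx = 1.0 / (n + 1)
x = dx * np.arange(1, n + 1)
u0 = np.sin(np.pi * x)                  # lowest Dirichlet eigenmode
uT = crank_nicolson_heat(u0, D=1.0, dx=dx, dt=1e-3, steps=100)
```

For this eigenmode the exact solution decays as exp(-D pi^2 t), which provides a direct accuracy check on the scheme.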
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), that exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces
Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.
2012-01-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. 
The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358
Symmetry breaking and singularity structure in Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Commeford, K. A.; Garcia-March, M. A.; Ferrando, A.; Carr, Lincoln D.
2012-08-01
We determine the trajectories of vortex singularities that arise after a single vortex is broken by a discretely symmetric impulse in the context of Bose-Einstein condensates in a harmonic trap. The dynamics of these singularities are analyzed to determine the form of the imprinted motion. We find that the symmetry-breaking process introduces two effective forces: a repulsive harmonic force that causes the daughter trajectories to be ejected from the parent singularity and a Magnus force that introduces a torque about the axis of symmetry. For the analytical noninteracting case we find that the parent singularity is reconstructed from the daughter singularities after one period of the trapping frequency. The interactions between singularities in the weakly interacting system do not allow the parent vortex to be reconstructed. Analytic trajectories were compared to the actual minima of the wave function, showing less than 0.5% error for an impulse strength of v=0.00005. We show that these solutions are valid within the impulse regime for various impulse strengths using numerical integration of the Gross-Pitaevskii equation. We also show that the actual duration of the symmetry-breaking potential does not significantly change the dynamics of the system as long as the strength is below v=0.0005.
FREQ: A computational package for multivariable system loop-shaping procedures
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Armstrong, Ernest S.
1989-01-01
Many approaches in the field of linear, multivariable, time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular value plots of selected transfer matrices. A software package, FREQ, is documented for computing within one unified framework many of the most used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequency values, and plots are generated of their singular values against frequency. Example computations are presented to demonstrate the use of the FREQ code.
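The kind of singular-value-versus-frequency computation that FREQ automates can be sketched in a few lines. The state-space matrices below are hypothetical stand-ins for illustration, not part of the FREQ package.

```python
import numpy as np

# Singular values of the transfer matrix G(jw) = C (jw I - A)^{-1} B + D
# on a log-spaced frequency grid, for a small stable test system.
A = np.array([[-1.0, 0.0], [0.0, -10.0]])
B = np.eye(2)
C = np.eye(2)
D = np.zeros((2, 2))

freqs = np.logspace(-2, 2, 50)
sv = np.array([
    np.linalg.svd(C @ np.linalg.solve(1j * w * np.eye(2) - A, B) + D,
                  compute_uv=False)
    for w in freqs
])
```

Each row of `sv` holds the singular values at one frequency; plotting the columns against `freqs` gives the loop-shaping plots the abstract describes.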
Resonances in Coupled π K - η K Scattering from Quantum Chromodynamics
Dudek, Jozef J.; Edwards, Robert G.; Thomas, Christopher E.; ...
2014-10-01
Using first-principles calculation within Quantum Chromodynamics, we are able to reproduce the pattern of experimental strange resonances which appear as complex singularities within coupled πK, ηK scattering amplitudes. We make use of numerical computation within the lattice discretized approach to QCD, extracting the energy dependence of scattering amplitudes through their relationship to the discrete spectrum of the theory in a finite volume, which we map out in unprecedented detail.
Isogeometric Divergence-conforming B-splines for the Darcy-Stokes-Brinkman Equations
2012-01-01
dimensionality of Q0,h using T-splines [5]. However, a proof of mesh-independent discrete stability remains absent with this choice of pressure space ... the boundary ∂K+/− of element K+/−. With the above notation established, let us define the following bilinear form: a*h(w,v) = Σ_{i=1}^{np} ( (2ν∇sw, ∇sv ... 8.3 Two-Dimensional Problem with a Singular Solution: To examine how our discretization performs in
Invariant object recognition based on the generalized discrete radon transform
NASA Astrophysics Data System (ADS)
Easley, Glenn R.; Colonna, Flavia
2004-04-01
We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.
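The role circular shifting plays in building translation-invariant features can be illustrated with a toy sketch (not the authors' actual Radon/ridgelet pipeline): the magnitude of the DFT is unchanged by any circular shift of the input.

```python
import numpy as np

def shift_invariant_features(v):
    # A circular shift only multiplies each DFT bin by a unit-modulus phase,
    # so the DFT magnitudes form a shift-invariant feature vector.
    return np.abs(np.fft.fft(v))

rng = np.random.default_rng(0)
v = rng.standard_normal(16)
f0 = shift_invariant_features(v)
f5 = shift_invariant_features(np.roll(v, 5))
```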
Calculations of axisymmetric vortex sheet roll-up using a panel and a filament model
NASA Technical Reports Server (NTRS)
Kantelis, J. P.; Widnall, S. E.
1986-01-01
A method for calculating the self-induced motion of a vortex sheet using discrete vortex elements is presented. Vortex panels and vortex filaments are used to simulate two-dimensional and axisymmetric vortex sheet roll-up. A straightforward application using vortex elements to simulate the motion of a disk of vorticity with an elliptic circulation distribution yields unsatisfactory results, where the vortex elements move in a chaotic manner. The difficulty is assumed to be due to the inability of a finite number of discrete vortex elements to model the singularity at the sheet edge and due to large velocity calculation errors which result from uneven sheet stretching. A model of the inner portion of the spiral is introduced to eliminate the difficulty with the sheet edge singularity. The model replaces the outermost portion of the sheet with a single vortex of equivalent circulation and a number of higher order terms which account for the asymmetry of the spiral. The resulting discrete vortex model is applied to both two-dimensional and axisymmetric sheets. The two-dimensional roll-up is compared to the solution for a semi-infinite sheet with good results.
NASA Astrophysics Data System (ADS)
Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.
2001-12-01
Fast algorithms for a wide class of non-separable n-dimensional (nD) discrete unitary K-transforms (DKT) are introduced. They require fewer 1D DKTs than the classical radix-2 FFT-type approach. The method utilizes a decomposition of the nD K-transform into the product of a new nD discrete Radon transform and a set of parallel/independent 1D K-transforms. If the nD K-transform has a separable kernel (e.g., the case of the discrete Fourier transform), our approach decreases the multiplicative complexity by a factor of n compared to the classical row/column separable approach. It is well known that an n-th order Volterra filter of a one-dimensional signal can be evaluated by an appropriate nD linear convolution. This work describes a new superfast algorithm for Volterra filtering. The new approach is based on the superfast discrete Radon and Nussbaumer polynomial transforms.
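The classical row/column separable approach that serves as the baseline here can be sketched for the 2D DFT: apply 1D FFTs along one axis, then along the other, and the result equals the full 2D transform.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))

# Separable evaluation of the 2D DFT: 1D FFTs along rows, then columns.
rows = np.fft.fft(x, axis=1)
separable = np.fft.fft(rows, axis=0)

direct = np.fft.fft2(x)  # reference 2D DFT
```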
Singular unlocking transition in the Winfree model of coupled oscillators.
Quinn, D Dane; Rand, Richard H; Strogatz, Steven H
2007-03-01
The Winfree model consists of a population of globally coupled phase oscillators with randomly distributed natural frequencies. As the coupling strength and the spread of natural frequencies are varied, the various stable states of the model can undergo bifurcations, nearly all of which have been characterized previously. The one exception is the unlocking transition, in which the frequency-locked state disappears abruptly as the spread of natural frequencies exceeds a critical width. Viewed as a function of the coupling strength, this critical width defines a bifurcation curve in parameter space. For the special case where the frequency distribution is uniform, earlier work had uncovered a puzzling singularity in this bifurcation curve. Here we seek to understand what causes the singularity. Using the Poincaré-Lindstedt method of perturbation theory, we analyze the locked state and its associated unlocking transition, first for an arbitrary distribution of natural frequencies, and then for discrete systems of N oscillators. We confirm that the bifurcation curve becomes singular for a continuum uniform distribution, yet find that it remains well behaved for any finite N , suggesting that the continuum limit is responsible for the singularity.
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain, from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of the fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
Discretizing singular point sources in hyperbolic wave propagation problems
Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...
2016-06-01
Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
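A minimal sketch of the moment conditions alone (ignoring the smoothness conditions the paper adds): choose weights on nearby grid nodes so that the discrete source reproduces the first few moments of a point source at x0.

```python
import numpy as np

def delta_weights(nodes, x0):
    # Solve the Vandermonde system sum_j w_j * x_j^m = x0^m for m = 0..p-1,
    # so the discrete source has the same first p moments as delta(x - x0).
    p = len(nodes)
    V = np.vander(nodes, p, increasing=True).T  # V[m, j] = x_j^m
    return np.linalg.solve(V, x0 ** np.arange(p))

nodes = np.array([0.0, 0.5, 1.0, 1.5])
w = delta_weights(nodes, 0.7)
```

With p moment conditions, integrating any polynomial of degree below p against the discrete source reproduces its point value at x0 exactly.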
Revised Thomas-Fermi approximation for singular potentials
NASA Astrophysics Data System (ADS)
Dufty, James W.; Trickey, S. B.
2016-08-01
Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.
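A concrete instance of such tempering by convolution (a standard textbook example, not the paper's specific finite-temperature response function): convolving the attractive Coulomb potential -1/r with a normalized Gaussian of width s yields -erf(r/s)/r, which is finite at the origin.

```python
import math

def v_regularized(r, s):
    # Gaussian-smeared Coulomb potential: -erf(r/s)/r, with the finite
    # limiting value -2/(s*sqrt(pi)) at r = 0.
    if r == 0.0:
        return -2.0 / (s * math.sqrt(math.pi))
    return -math.erf(r / s) / r

s = 0.5
near_origin = v_regularized(1e-8, s)  # finite, unlike -1/r
far_away = v_regularized(10.0, s)     # indistinguishable from -1/r
```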
Calculating corner singularities by boundary integral equations.
Shi, Hualiang; Lu, Ya Yan; Du, Qiang
2017-06-01
Accurate numerical solutions for electromagnetic fields near sharp corners and edges are important for nanophotonics applications that rely on strong near fields to enhance light-matter interactions. For cylindrical structures, the singularity exponents of electromagnetic fields near sharp edges can be solved analytically, but in general the actual fields can only be calculated numerically. In this paper, we use a boundary integral equation method to compute electromagnetic fields near sharp edges, and construct the leading terms in asymptotic expansions based on numerical solutions. Our integral equations are formulated for rescaled unknown functions to avoid unbounded field components, and are discretized with a graded mesh and properly chosen quadrature schemes. The numerically found singularity exponents agree well with the exact values in all the test cases presented here, indicating that the numerical solutions are accurate.
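A graded mesh of the kind used here can be sketched in one line: raising a uniform grid to a power clusters nodes toward the corner at x = 0 (the grading exponent below is chosen purely for illustration).

```python
import numpy as np

def graded_mesh(n, p):
    # n+1 nodes on [0, 1], clustered toward the corner at x = 0 when p > 1
    return (np.arange(n + 1) / n) ** p

x = graded_mesh(16, 3.0)
dx = np.diff(x)  # element sizes: smallest next to the corner
```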
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx||_2 subject to ||Ax - b||_2 = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
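The TRSVD building block can be sketched as follows (a generic randomized SVD with oversampling and truncation, not the authors' full MTRSVD algorithm):

```python
import numpy as np

def trsvd(A, k, q=5, seed=0):
    # Rank-(k+q) randomized range finder, then truncation to rank k
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + q))
    Q, _ = np.linalg.qr(A @ Omega)  # orthonormal basis for range(A @ Omega)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))  # exactly rank 3
U, s, Vt = trsvd(A, 3)
```

For a matrix of exact rank k the sampled range captures A's column space, so the truncated factors reproduce A; for ill-posed problems the same factors give a good low rank approximation.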
Segmentation of discrete vector fields.
Li, Hongyu; Chen, Wenbin; Shen, I-Fan
2006-01-01
In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.
A VLSI pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.
1986-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q^n). A pipeline structure is used to implement this prime factor DFT over GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
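The cyclic-convolution role of such DFTs can be illustrated over the complex numbers (ordinary floating point here, not arithmetic in GF(q^n)): the DFT turns circular convolution into a pointwise product.

```python
import numpy as np

def cyclic_conv(a, b):
    # Circular convolution via the DFT convolution theorem
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0, 8.0])
n = len(a)
# Direct circular convolution for reference
direct = np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                   for i in range(n)])
```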
Cotton fibre cross-section properties
USDA-ARS?s Scientific Manuscript database
From a structural perspective the cotton fibre is a singularly discrete, elongated plant cell with no junctions or inter-cellular boundaries. Its form in nature is essentially unadulterated from the field to the spinning mill where its cross-section properties, as for any textile fibre, are central ...
Multi-Regge kinematics and the moduli space of Riemann spheres with marked points
Del Duca, Vittorio; Druc, Stefan; Drummond, James; ...
2016-08-25
We show that scattering amplitudes in planar N = 4 Super Yang-Mills in multi-Regge kinematics can naturally be expressed in terms of single-valued iterated integrals on the moduli space of Riemann spheres with marked points. As a consequence, scattering amplitudes in this limit can be expressed as convolutions that can easily be computed using Stokes’ theorem. We apply this framework to MHV amplitudes to leading-logarithmic accuracy (LLA), and we prove that at L loops all MHV amplitudes are determined by amplitudes with up to L + 4 external legs. We also investigate non-MHV amplitudes, and we show that they can be obtained by convoluting the MHV results with a certain helicity flip kernel. We classify all leading singularities that appear at LLA in the Regge limit for arbitrary helicity configurations and any number of external legs. In conclusion, we use our new framework to obtain explicit analytic results at LLA for all MHV amplitudes up to five loops and all non-MHV amplitudes with up to eight external legs and four loops.
Digital high speed programmable convolver
NASA Astrophysics Data System (ADS)
Rearick, T. C.
1984-12-01
A circuit module for rapidly calculating a discrete numerical convolution is described. A convolution such as finding the sum of the products of a 16 bit constant and a 16 bit variable is performed by a module which is programmable so that the constant may be changed for a new problem. In addition, the module may be programmed to find the sum of the products of 4 and 8 bit constants and variables. RAMs (random access memories) are loaded with partial products of the selected constant and all possible variables. Then, when the actual variable is loaded, it acts as an address to find the correct partial product in the particular RAM. The partial products from all of the RAMs are shifted to the appropriate numerical power position (if necessary) and then added in adder elements.
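The RAM-lookup scheme can be mimicked in software (a behavioral sketch of the 16-bit mode, with made-up coefficients and samples): each table maps an 8-bit address to a precomputed, pre-shifted partial product, so a multiply becomes two lookups and an add.

```python
def make_tables(c):
    # "RAM" contents for one coefficient: partial products addressed by a byte
    lo = [c * b for b in range(256)]          # low-byte partial products
    hi = [(c * b) << 8 for b in range(256)]   # high-byte products, pre-shifted
    return lo, hi

def convolve_lut(coeffs, samples):
    # Sum of products over the taps; each product is done by table lookups only
    tables = [make_tables(c) for c in coeffs]
    out = []
    for i in range(len(samples) - len(coeffs) + 1):
        acc = 0
        for (lo, hi), s in zip(tables, samples[i:i + len(coeffs)]):
            acc += lo[s & 0xFF] + hi[s >> 8]  # 16-bit sample: two lookups
        out.append(acc)
    return out

coeffs = [3, 1, 2]
samples = [100, 2000, 65535, 7, 1234]
result = convolve_lut(coeffs, samples)
```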
Rényi and Tsallis formulations of separability conditions in finite dimensions
NASA Astrophysics Data System (ADS)
Rastegin, Alexey E.
2017-12-01
Separability conditions for a bipartite quantum system of finite-dimensional subsystems are formulated in terms of Rényi and Tsallis entropies. Entropic uncertainty relations often lead to entanglement criteria. We propose a new approach based on the convolution of discrete probability distributions. Measurements on the total system are constructed from local ones according to the convolution scheme. Separability conditions are derived on the basis of uncertainty relations of the Maassen-Uffink type as well as majorization relations. On each of the subsystems, we use a pair of sets of subnormalized vectors that form rank-one POVMs. We also obtain entropic separability conditions for local measurements with a special structure, such as mutually unbiased bases and symmetric informationally complete measurements. The relevance of the derived separability conditions is demonstrated with several examples.
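The convolution scheme rests on a standard fact worth a two-line illustration: the distribution of a sum of independent discrete variables is the convolution of their distributions.

```python
import numpy as np

die = np.full(6, 1 / 6)          # PMF of one fair die (faces 1..6)
pmf_sum = np.convolve(die, die)  # PMF of the sum of two dice (values 2..12)
```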
Maher, Hadir M; Ragab, Marwa A A; El-Kimary, Eman I
2015-01-01
Methotrexate (MTX) is widely used to treat rheumatoid arthritis (RA), mostly along with non-steroidal anti-inflammatory drugs (NSAIDs), the most common of which is aspirin, or acetylsalicylic acid (ASA). Since NSAIDs impair MTX clearance and increase its toxicity, it was necessary to develop a simple and reliable method for the monitoring of MTX levels in urine samples when coadministered with ASA. The method was based on the spectrofluorimetric measurement of the acid-induced hydrolysis product of MTX, 4-amino-4-deoxy-10-methylpteroic acid (AMP), along with the strongly fluorescent salicylic acid (SA), a product of acid-induced hydrolysis of aspirin and its metabolites in urine. The overlapping emission spectra were resolved using the derivative method (D method). In addition, the corresponding derivative emission spectra were convoluted using discrete Fourier functions, 8-point sin x_i polynomials (the D/FF method), for better elimination of interferences. Validation of the developed methods was carried out according to the ICH guidelines. Moreover, the data obtained using derivative and convoluted derivative spectra were treated using the non-parametric Theil's method (NP) and compared with the least-squares parametric regression method (LSP). The results treated with Theil's method were more accurate and precise than those from LSP, since the former is less affected by outliers. This work demonstrates the potential of both derivative and convoluted derivative spectra using discrete Fourier functions, in addition to the effectiveness of NP regression analysis of the data. The high sensitivity obtained by the proposed methods was promising for measuring low concentration levels of the two drugs in urine samples. These methods were efficiently used to measure the drugs in human urine samples following their co-administration.
An optimization-based framework for anisotropic simplex mesh adaptation
NASA Astrophysics Data System (ADS)
Yano, Masayuki; Darmofal, David L.
2012-09-01
We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering
NASA Technical Reports Server (NTRS)
Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)
2001-01-01
Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
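The SVD step can be sketched on a synthetic time-frequency matrix (toy data, not flight data): a nearly separable energy distribution has one dominant singular value that captures the principal feature.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 64)
# Rank-1 (separable) time-frequency energy distribution plus weak noise
E = np.outer(np.hanning(32), np.sin(2 * np.pi * 3 * t) ** 2)
E += 1e-3 * rng.random((32, 64))
s = np.linalg.svd(E, compute_uv=False)  # singular values, descending
```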
High order discretization techniques for real-space ab initio simulations
NASA Astrophysics Data System (ADS)
Anderson, Christopher R.
2018-03-01
In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.
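The payoff of high order grid-based approximation can be checked with a small convergence test (generic periodic central differences, assumed here in place of the paper's specific schemes): doubling the resolution cuts the error by about 2^order.

```python
import numpy as np

def laplacian_error(n, order):
    # Max error of a periodic finite-difference Laplacian of sin(x) on [0, 2*pi)
    h = 2 * np.pi / n
    u = np.sin(np.arange(n) * h)
    if order == 2:
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    else:  # fourth-order central stencil
        lap = (-np.roll(u, -2) + 16 * np.roll(u, -1) - 30 * u
               + 16 * np.roll(u, 1) - np.roll(u, 2)) / (12 * h**2)
    return np.max(np.abs(lap + u))  # exact Laplacian of sin is -sin
```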
A FFT-based formulation for discrete dislocation dynamics in heterogeneous media
NASA Astrophysics Data System (ADS)
Bertin, N.; Capolungo, L.
2018-02-01
In this paper, an extension of the DDD-FFT approach presented in [1] is developed for heterogeneous elasticity. For such a purpose, an iterative spectral formulation in which convolutions are calculated in the Fourier space is developed to solve for the mechanical state associated with the discrete eigenstrain-based microstructural representation. With this, the heterogeneous DDD-FFT approach is capable of treating anisotropic and heterogeneous elasticity in a computationally efficient manner. In addition, a GPU implementation is presented to allow for further acceleration. As a first example, the approach is used to investigate the interaction between dislocations and second-phase particles, thereby demonstrating its ability to inherently incorporate image forces arising from elastic inhomogeneities.
Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.
Abbasi, Mahdi
2014-01-01
The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
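The way an FFT dissolves a discrete convolution equation can be shown in miniature (a generic circular deconvolution, not the D-bar kernel itself): in Fourier space the convolution is a pointwise product, so division recovers the unknown.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
x_true = rng.standard_normal(n)
h = np.exp(-np.arange(n) / 4.0)  # kernel with no zero DFT bins (invertible)
y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)).real  # data: y = h (*) x

# Solve the convolution equation by pointwise division in Fourier space
x_rec = np.fft.ifft(np.fft.fft(y) / np.fft.fft(h)).real
```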
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.
A new class of asymptotically non-chaotic vacuum singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinger, Paul, E-mail: paul.klinger@univie.ac.at
2015-12-15
The BKL conjecture, stated in the 1960s and early 1970s by Belinski, Khalatnikov and Lifshitz, proposes a detailed description of the generic asymptotic dynamics of spacetimes as they approach a spacelike singularity. It predicts complicated chaotic behaviour in the generic case, but simpler non-chaotic behaviour in cases with symmetry assumptions or certain kinds of matter fields. Here we construct a new class of four-dimensional vacuum spacetimes containing spacelike singularities which show non-chaotic behaviour. In contrast with previous constructions, no symmetry assumptions are made. Rather, the metric is decomposed in Iwasawa variables and conditions on the asymptotic evolution of some of them are imposed. The constructed solutions contain five free functions of all space coordinates, two of which are constrained by inequalities. We investigate continuous and discrete isometries and compare the solutions to previous constructions. Finally, we give the asymptotic behaviour of the metric components and curvature.
Vortex equations: Singularities, numerical solution, and axisymmetric vortex breakdown
NASA Technical Reports Server (NTRS)
Bossel, H. H.
1972-01-01
A method of weighted residuals for the computation of rotationally symmetric quasi-cylindrical viscous incompressible vortex flow is presented and used to compute a wide variety of vortex flows. The method approximates the axial velocity and circulation profiles by series of exponentials having (N + 1) and N free parameters, respectively. Formal integration results in a set of (2N + 1) ordinary differential equations for the free parameters. The governing equations are shown to have an infinite number of discrete singularities corresponding to critical values of the swirl parameters. The computations point to the controlling influence of the inner core flow on vortex behavior. They also confirm the existence of two particular critical swirl parameter values: one separates vortex flow which decays smoothly from vortex flow which eventually breaks down, and the second is the first singularity of the quasi-cylindrical system, at which point physical vortex breakdown is thought to occur.
NASA Astrophysics Data System (ADS)
Hu, Hwai-Tsu; Chou, Hsien-Hsin; Yu, Chu; Hsu, Ling-Yuan
2014-12-01
This paper presents a novel approach for blind audio watermarking. The proposed scheme utilizes the flexibility of discrete wavelet packet transformation (DWPT) to approximate the critical bands and adaptively determines suitable embedding strengths for carrying out quantization index modulation (QIM). The singular value decomposition (SVD) is employed to analyze the matrix formed by the DWPT coefficients and embed watermark bits by manipulating singular values subject to perceptual criteria. To achieve even better performance, two auxiliary enhancement measures are attached to the developed scheme. Performance evaluation and comparison are demonstrated in the presence of common digital signal processing attacks. Experimental results confirm that the combination of the DWPT, SVD, and adaptive QIM achieves imperceptible data hiding with satisfactory robustness and payload capacity. Moreover, the inclusion of self-synchronization capability allows the developed watermarking system to withstand time-shifting and cropping attacks.
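The core embed/extract mechanism, QIM applied to a singular value, can be illustrated in miniature. The step size, block shape, and host signal below are illustrative assumptions; the paper's scheme additionally adapts the step to perceptual criteria in the DWPT domain:

```python
import numpy as np

# Minimal QIM-on-singular-values sketch (toy parameters, not the paper's
# adaptive scheme): embed one bit in the largest singular value of a
# coefficient block, then read it back from the modified block.
def embed_bit(block, bit, step=1.0):
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    s = s.copy()
    # Quantize the largest singular value onto the lattice for `bit`:
    # integer multiples of `step` for bit 0, offset by step/2 for bit 1.
    s[0] = np.round((s[0] - bit * step / 2) / step) * step + bit * step / 2
    return u @ np.diag(s) @ vt

def extract_bit(block, step=1.0):
    s0 = np.linalg.svd(block, compute_uv=False)[0]
    return int(np.round(s0 / (step / 2))) % 2

rng = np.random.default_rng(1)
a = rng.standard_normal(8); a /= np.linalg.norm(a)
c = rng.standard_normal(8); c /= np.linalg.norm(c)
host = 6.0 * np.outer(a, c) + 0.2 * rng.standard_normal((8, 8))  # toy block
for bit in (0, 1):
    assert extract_bit(embed_bit(host, bit)) == bit
```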
Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.
Bhandari, A K; Soni, V; Kumar, A; Singh, G K
2014-07-01
This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low-contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT); the CS algorithm is used to optimize each subband, the singular value matrix of the thresholded low-low (LL) subband image is then obtained, and finally the enhanced image is reconstructed by applying the IDWT. The singular value matrix carries the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, mean and standard deviation over conventional and state-of-the-art techniques.
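The SVD-based intensity step can be sketched on its own. The DWT decomposition and the CS optimization of the paper are omitted here, and the toy image and target value are illustrative assumptions; the sketch only shows why rescaling singular values rescales intensity:

```python
import numpy as np

# Sketch of the singular-value correction step only: scaling all singular
# values of a (sub)band by a factor xi scales the whole band by xi, which
# is the mechanism the paper exploits to adjust intensity/contrast.
img = np.linspace(0.1, 0.4, 64).reshape(8, 8)   # toy low-contrast "LL band"
u, s, vt = np.linalg.svd(img)
target_peak = 1.0                               # illustrative target
xi = target_peak / s[0]                         # correction factor
enhanced = u @ np.diag(xi * s) @ vt
# Uniformly scaled singular values reproduce a uniformly scaled image.
assert np.allclose(enhanced, xi * img)
```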
NASA Technical Reports Server (NTRS)
Shu, J. Y.
1983-01-01
Two different singularity methods have been utilized to calculate the potential flow past a three dimensional non-lifting body. Two separate FORTRAN computer programs have been developed to implement these theoretical models, which will in the future allow inclusion of the fuselage effect in a pair of existing subcritical wing design computer programs. The first method uses higher order axial singularity distributions to model axisymmetric bodies of revolution in an either axial or inclined uniform potential flow. Use of inset of the singularity line away from the body for blunt noses, and cosine-type element distributions have been applied to obtain the optimal results. Excellent agreement to five significant figures with the exact solution pressure coefficient value has been found for a series of ellipsoids at different angles of attack. Solutions obtained for other axisymmetric bodies compare well with available experimental data. The second method utilizes distributions of singularities on the body surface, in the form of a discrete vortex lattice. This program is capable of modeling arbitrary three dimensional non-lifting bodies. Much effort has been devoted to finding the optimal method of calculating the tangential velocity on the body surface, extending techniques previously developed by other workers.
Optical chirp z-transform processor with a simplified architecture.
Ngo, Nam Quoc
2014-12-29
Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
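The discrete-time convolution form of the CZT that the abstract builds on is Bluestein's identity, which rewrites the DFT as a convolution with a chirp. A minimal numerical sketch (generic algorithm, not the optical architecture):

```python
import numpy as np

# Bluestein-style chirp z-transform: using nk = (n^2 + k^2 - (k-n)^2)/2,
# the DFT becomes a chirp pre-multiply, a discrete convolution with a
# chirp, and a chirp post-multiply.
def czt_dft(x):
    n_pts = len(x)
    k = np.arange(n_pts)
    chirp = np.exp(-1j * np.pi * k**2 / n_pts)
    a = x * chirp                                   # pre-chirped input
    # Convolution kernel b[m] = exp(+i pi m^2 / N) for m = -(N-1)..(N-1).
    b = np.exp(1j * np.pi * np.arange(-(n_pts - 1), n_pts)**2 / n_pts)
    conv = np.convolve(a, b)[n_pts - 1:2 * n_pts - 1]
    return chirp * conv                             # post-chirp

x = np.random.default_rng(2).standard_normal(16)
assert np.allclose(czt_dft(x), np.fft.fft(x))       # special case: the DFT
```

This is the sense in which an ODFT processor arises as a special case of a CZT processor: the DFT is just the CZT evaluated on the unit circle at equispaced points.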
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal ℓp-regularity for several time-stepping schemes for a fractional evolution model, which involves a fractional derivative of order α in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis are the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
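A backward Euler convolution quadrature of the kind analyzed here is easy to exhibit concretely. The notation below is an assumed standard one (weights as Taylor coefficients of (1 - z)^α / τ^α); the test problem u(t) = t and the parameter values are illustrative:

```python
import numpy as np
from math import gamma

# Convolution-quadrature weights for the backward Euler discretization of a
# fractional time derivative of order alpha: the weights are the Taylor
# coefficients of (1 - z)^alpha / tau^alpha, via the standard recurrence.
def cq_weights(alpha, n, tau):
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):          # w_j = w_{j-1} * (j - 1 - alpha) / j
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w / tau**alpha

# Check on u(t) = t: the quadrature sum_j w_j u(t_{n-j}) should approach
# the fractional derivative t^(1-alpha) / Gamma(2 - alpha).
alpha, tau, n = 0.5, 1e-3, 1000
w = cq_weights(alpha, n, tau)
u = np.arange(n + 1) * tau             # samples of u(t) = t
approx = np.dot(w, u[::-1])            # quadrature at t_n = n * tau = 1
exact = (n * tau)**(1 - alpha) / gamma(2 - alpha)
assert abs(approx - exact) < 1e-2      # first-order accuracy in tau
```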
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
A Varifold Approach to Surface Approximation
NASA Astrophysics Data System (ADS)
Buet, Blanche; Leonardi, Gian Paolo; Masnou, Simon
2017-11-01
We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], in the fluxes presented in [3]. These interpolants use the weighted essentially nonoscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berzi, Diego; Vescovi, Dalila
2015-01-15
We use previous results from discrete element simulations of simple shear flows of rigid, identical spheres in the collisional regime to show that the volume fraction-dependence of the stresses is singular at the shear rigidity. Here, we identify the shear rigidity, which is a decreasing function of the interparticle friction, as the maximum volume fraction beyond which a random collisional assembly of grains cannot be sheared without developing force chains that span the entire domain. In the framework of extended kinetic theory, i.e., kinetic theory that accounts for the decrease in the collisional dissipation due to the breaking of molecular chaos at volume fractions larger than 0.49, we also show that the volume fraction-dependence of the correlation length (measure of the velocity correlation) is singular at random close packing, independent of the interparticle friction. The difference in the singularities ensures that the ratio of the shear stress to the pressure at shear rigidity is different from zero even in the case of frictionless spheres: we identify that with the yield stress ratio of granular materials, and we show that the theoretical predictions, once the different singularities are inserted into the functions of extended kinetic theory, are in excellent agreement with the results of numerical simulations.
Inverting dedevelopment: geometric singularity theory in embryology
NASA Astrophysics Data System (ADS)
Bookstein, Fred L.; Smith, Bradley R.
2000-10-01
The diffeomorphism model so useful in the biomathematics of normal morphological variability and disease is inappropriate for applications in embryogenesis, where whole coordinate patches are created out of single points. For this application we need a suitable algebra for the creation of something from nothing in a carefully organized geometry: a formalism for parameterizing discrete nondifferentiabilities of invertible functions on R^k, k > 1. One easy way to begin is via the inverse of the development map - call it the dedevelopment map, the deformation backwards in time. Extrapolated, this map will inevitably have singularities at which its derivative is zero. When the dedevelopment map is inverted to face forward in time, the singularities become appropriately isolated infinities of derivative. We have recently introduced growth visualizations via extrapolations to the isolated singularities at which only one directional derivative is zero. Maps inverse to these create new coordinate patches directionally rather than radially. The most generic singularity that suits this purpose is the crease f(x, y) = (x, x^2 y + y^3), which has already been applied in morphometrics for the description of focal morphogenetic phenomena. We apply it to embryogenesis in the form of its analytic inverse, and demonstrate its power using a priceless new data set of mouse embryos imaged in 3D by micro-MR with voxels smaller than 100 μm^3.
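The crease map quoted in the abstract is concrete enough to check numerically; a short sketch confirms that its Jacobian degenerates only at the isolated singular point at the origin:

```python
import numpy as np

# The crease f(x, y) = (x, x^2 y + y^3) from the abstract.  Its Jacobian is
#   J = [[1, 0], [2xy, x^2 + 3y^2]],  so  det J = x^2 + 3y^2,
# which vanishes only at (0, 0): an isolated singularity of the map.
def crease(x, y):
    return np.array([x, x**2 * y + y**3])

def jac_det(x, y):
    return x**2 + 3 * y**2

assert jac_det(0.0, 0.0) == 0.0            # singular exactly at the origin
for x, y in [(1.0, 0.0), (0.0, 0.5), (-0.3, 0.2)]:
    assert jac_det(x, y) > 0.0             # invertible everywhere else
```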
Chaotic attractors of relaxation oscillators
NASA Astrophysics Data System (ADS)
Guckenheimer, John; Wechselberger, Martin; Young, Lai-Sang
2006-03-01
We develop a general technique for proving the existence of chaotic attractors for three-dimensional vector fields with two time scales. Our results connect two important areas of dynamical systems: the theory of chaotic attractors for discrete two-dimensional Hénon-like maps and geometric singular perturbation theory. Two-dimensional Hénon-like maps are diffeomorphisms that limit on non-invertible one-dimensional maps. Wang and Young formulated hypotheses that suffice to prove the existence of chaotic attractors in these families. Three-dimensional singularly perturbed vector fields have return maps that are also two-dimensional diffeomorphisms limiting on one-dimensional maps. We describe a generic mechanism that produces folds in these return maps and demonstrate that the Wang-Young hypotheses are satisfied. Our analysis requires a careful study of the convergence of the return maps to their singular limits in the C^k topology for k >= 3. The theoretical results are illustrated with a numerical study of a variant of the forced van der Pol oscillator.
An extended 3D discrete-continuous model and its application on single- and bi-crystal micropillars
NASA Astrophysics Data System (ADS)
Huang, Minsheng; Liang, Shuang; Li, Zhenhuan
2017-04-01
A 3D discrete-continuous model (3D DCM), which couples the 3D discrete dislocation dynamics (3D DDD) and finite element method (FEM), is extended in this study. New schemes for two key information transfers between DDD and FEM, i.e. plastic-strain distribution from DDD to FEM and stress transfer from FEM to DDD, are suggested. The plastic strain induced by moving dislocation segments is distributed to an elementary spheroid (ellipsoid or sphere) via a specific new distribution function. The influence of various interfaces (such as free surfaces and grain boundaries (GBs)) on the plastic-strain distribution is specially considered. By these treatments, the deformation fields can be solved accurately even for dislocations on slip planes severely inclined to the FE mesh, with no spurious stress concentration points produced. In addition, a stress correction by singular and non-singular theoretical solutions within a cut-off sphere is introduced to calculate the stress on the dislocations accurately. By these schemes, the present DCM becomes less sensitive to the FE mesh and more numerically efficient, which can also consider the interaction between neighboring dislocations appropriately even though they reside in the same FE mesh. Furthermore, the present DCM has been employed to model the compression of single-crystal and bi-crystal micropillars with rigid and dislocation-absorbed GBs. The influence of internal GB on the jerky stress-strain response and deformation mode is studied in detail to shed more light on these important micro-plastic problems.
Linear stability analysis of collective neutrino oscillations without spurious modes
NASA Astrophysics Data System (ADS)
Morinaga, Taiki; Yamada, Shoichi
2018-01-01
Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.
Algebraic signal processing theory: 2-D spatial hexagonal lattice.
Püschel, Markus; Rötteler, Martin
2007-06-01
We develop the framework for signal processing on a spatial, or undirected, 2-D hexagonal lattice for both an infinite and a finite array of signal samples. This framework includes the proper notions of z-transform, boundary conditions, filtering or convolution, spectrum, frequency response, and Fourier transform. In the finite case, the Fourier transform is called discrete triangle transform. Like the hexagonal lattice, this transform is nonseparable. The derivation of the framework makes it a natural extension of the algebraic signal processing theory that we recently introduced. Namely, we construct the proper signal models, given by polynomial algebras, bottom-up from a suitable definition of hexagonal space shifts using a procedure provided by the algebraic theory. These signal models, in turn, then provide all the basic signal processing concepts. The framework developed in this paper is related to Mersereau's early work on hexagonal lattices in the same way as the discrete cosine and sine transforms are related to the discrete Fourier transform-a fact that will be made rigorous in this paper.
NASA Astrophysics Data System (ADS)
Albaba, Adel; Lambert, Stéphane; Faug, Thierry
2018-05-01
The present paper investigates the mean impact force exerted by a granular mass flowing down an incline and impacting a rigid wall of semi-infinite height. First, this granular flow-wall interaction problem is modeled by numerical simulations based on the discrete element method (DEM). These DEM simulations allow computing the depth-averaged quantities—thickness, velocity, and density—of the incoming flow and the resulting mean force on the rigid wall. Second, that problem is described by a simple analytic solution based on a depth-averaged approach for a traveling compressible shock wave, whose volume is assumed to shrink into a singular surface, and which coexists with a dead zone. It is shown that the dead-zone dynamics and the mean force on the wall computed from DEM can be reproduced reasonably well by the analytic solution proposed over a wide range of slope angle of the incline. These results are obtained by feeding the analytic solution with the thickness, the depth-averaged velocity, and the density averaged over a certain distance along the incline rather than flow quantities taken at a singular section before the jump, thus showing that the assumption of a shock wave volume shrinking into a singular surface is questionable. The finite length of the traveling wave upstream of the grains piling against the wall must be considered. The sensitivity of the model prediction to that sampling length remains complicated, however, which highlights the need of further investigation about the properties and the internal structure of the propagating granular wave.
Canonical quantization of general relativity in discrete space-times.
Gambini, Rodolfo; Pullin, Jorge
2003-01-17
It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.
NASA Astrophysics Data System (ADS)
Kamiya, Ryo; Kanki, Masataka; Mase, Takafumi; Tokihiro, Tetsuji
2017-01-01
We introduce a so-called coprimeness-preserving non-integrable extension to the two-dimensional Toda lattice equation. We believe that this equation is the first example of such discrete equations defined over a three-dimensional lattice. We prove that all the iterates of the equation are irreducible Laurent polynomials of the initial data and that every pair of two iterates is co-prime, which indicate confined singularities of the equation. By reducing the equation to two- or one-dimensional lattices, we obtain coprimeness-preserving non-integrable extensions to the one-dimensional Toda lattice equation and the Somos-4 recurrence.
Discrete transparent boundary conditions for the mixed KDV-BBM equation
NASA Astrophysics Data System (ADS)
Besse, Christophe; Noble, Pascal; Sanchez, David
2017-09-01
In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KdV) and Benjamin-Bona-Mahony (BBM) equation which models water waves in the small amplitude, large wavelength regime. Continuous (respectively discrete) artificial boundary conditions involve nonlocal operators in time which in turn require computing time convolutions and inverting the Laplace transform of an analytic function (respectively the Z-transform of a holomorphic function). In this paper, we propose a new, stable and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large time simulations, we also introduce a methodology based on the asymptotic expansion of coefficients involved in exact direct transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave packet initial data.
An accurate boundary element method for the exterior elastic scattering problem in two dimensions
NASA Astrophysics Data System (ADS)
Bao, Gang; Xu, Liwei; Yin, Tao
2017-11-01
This paper is concerned with a Galerkin boundary element method solving the two dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of hyper-singular boundary integral operator. A new computational approach is employed based on the series expansions of Hankel functions for the computation of weakly-singular boundary integral operators during the reduction of corresponding Galerkin equations into a discrete linear system. The effectiveness of proposed numerical methods is demonstrated using several numerical examples.
Singular dynamics of a q-difference Painlevé equation in its initial-value space
NASA Astrophysics Data System (ADS)
Joshi, N.; Lobb, S. B.
2016-01-01
We construct the initial-value space of a q-discrete first Painlevé equation explicitly and describe the behaviours of its solutions w(n) in this space as n → ∞, with particular attention paid to neighbourhoods of exceptional lines and irreducible components of the anti-canonical divisor. These results show that trajectories starting in domains bounded away from the origin in initial-value space are repelled away from such singular lines. However, the dynamical behaviours in neighbourhoods containing the origin are complicated by the merger of two simple base points at the origin in the limit. We show that these lead to a saddle-point-type behaviour in a punctured neighbourhood of the origin.
Excitation of Continuous and Discrete Modes in Incompressible Boundary Layers
NASA Technical Reports Server (NTRS)
Ashpis, David E.; Reshotko, Eli
1998-01-01
This report documents the full details of the condensed journal article by Ashpis & Reshotko (JFM, 1990) entitled "The Vibrating Ribbon Problem Revisited." A revised formal solution of the vibrating ribbon problem of hydrodynamic stability is presented. The initial formulation of Gaster (JFM, 1965) is modified by application of the Briggs method and a careful treatment of the complex double Fourier transform inversions. Expressions are obtained in a natural way for the discrete spectrum as well as for the four branches of the continuous spectra. These correspond to discrete and branch-cut singularities in the complex wave-number plane. The solutions from the continuous spectra decay both upstream and downstream of the ribbon, with the decay in the upstream direction being much more rapid than that in the downstream direction. Comments and clarification of related prior work are made.
Conditioning of the Stable, Discrete-time Lyapunov Operator
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
The Schatten p-norm condition of the discrete-time Lyapunov operator L_A, defined on matrices P ∈ R^(n x n) by L_A P ≡ P - A P A^T, is studied for stable matrices A ∈ R^(n x n). Bounds are obtained for the norm of L_A and its inverse that depend on the spectrum, singular values and radius of stability of A. Since the solution P of the discrete-time algebraic Lyapunov equation (DALE) L_A P = Q can be ill-conditioned only when either L_A or Q is ill-conditioned, these bounds are useful in determining whether P admits a low-rank approximation, which is important in the numerical solution of the DALE for large n.
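The operator L_A and the DALE are small enough to exhibit directly; a sketch (with an assumed toy matrix A, built triangular so its stability is visible from the diagonal) writes L_A as a Kronecker-product matrix, solves the DALE, and reads off the conditioning:

```python
import numpy as np

# With row-major vec, vec(P - A P A^T) = (I - A ⊗ A) vec(P), so L_A is the
# n^2 x n^2 matrix below; its condition number is what the bounds in the
# abstract control.  A is upper triangular, so its eigenvalues are its
# diagonal entries, all inside the unit disc: A is stable.
n = 4
rng = np.random.default_rng(3)
A = np.triu(rng.standard_normal((n, n)), 1) + np.diag([0.9, 0.5, -0.3, 0.1])

L = np.eye(n * n) - np.kron(A, A)          # matrix form of L_A
Q = np.eye(n)
P = np.linalg.solve(L, Q.flatten()).reshape(n, n)
assert np.allclose(P - A @ P @ A.T, Q)     # P solves the DALE

cond = np.linalg.cond(L)                   # conditioning of L_A (2-norm)
```

For large n one would never form the n^2 x n^2 matrix explicitly; this is purely to make the operator and its conditioning tangible.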
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. 
The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
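The block filtering that the article takes as its starting point can be sketched generically. This is plain overlap-add block convolution, not the under-decimated subband convolver itself; it shows why blocks can be filtered independently (in parallel, at a lower rate) with output identical to direct convolution:

```python
import numpy as np

# Block filtering via overlap-add: the input is cut into blocks, each block
# is convolved with the filter (here via FFT), and the overlapping tails are
# summed.  The result equals direct convolution exactly.
def overlap_add(x, h, block=32):
    fft_len = block + len(h) - 1            # long enough for linear conv
    H = np.fft.fft(h, fft_len)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        out = np.real(np.fft.ifft(np.fft.fft(seg, fft_len) * H))
        y[start:start + len(seg) + len(h) - 1] += out[:len(seg) + len(h) - 1]
    return y

rng = np.random.default_rng(4)
x = rng.standard_normal(100)
h = rng.standard_normal(7)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```

As the article notes, this buys parallelism and a lower per-branch rate but no computational saving; the subband-domain convolvers it discusses address the accuracy side under quantization.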
NASA Astrophysics Data System (ADS)
Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal
2018-06-01
Prediction of the amount of water that will enter a reservoir in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems of the future. This study presents a methodology for predicting river flow in the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which was then applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolves the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory component, and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. All three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
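As an illustration of how such an input matrix can be built, below is a minimal sketch of a delay-vector (phase-space) embedding in the spirit of the chaotic approach. The function name and parameters are ours, not the paper's, and the embedding dimension and lag would in practice be chosen from the data.

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Build an input matrix of delay vectors (a simple phase-space embedding):
    row j is [x(j), x(j+tau), ..., x(j+(dim-1)*tau)], and the target is the
    next observation after the last delay coordinate."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau - 1
    X = np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])
    y = series[(dim - 1) * tau + 1:]
    return X, y

X, y = delay_embed(np.sin(0.5 * np.arange(100)), dim=4, tau=2)
assert X.shape[0] == len(y)   # one target per delay vector
```

Each row of X would serve as one training sample for the regression, with the next observation as its target.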
Discontinuous Galerkin Finite Element Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
In this paper, we develop a time discretization scheme and its corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of ||u_t(t)||_{L_2(Ω)} = ||u_t(t)||_2, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both the time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.
Identification and modification of dominant noise sources in diesel engines
NASA Astrophysics Data System (ADS)
Hayward, Michael D.
Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but it is a process that can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far fields with data from fewer tests than are currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations result in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources that, when scaled and added, result in the input cross-spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage-contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources in both the near and far fields from one fired test, which significantly reduces the need for extensive fired and motored testing.
Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
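The virtual-source step described above can be sketched as follows: a synthetic cross-spectral matrix (reduced here to a plain covariance matrix) is decomposed by SVD, and the singular values play the role of virtual-source power levels, with as many dominant values as there are independent physical sources. The mixing setup and all sizes below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics, n_snapshots = 4, 2000
sources = rng.standard_normal((2, n_snapshots))   # two independent "physical" sources
mixing = rng.standard_normal((n_mics, 2))         # unknown paths to the sensors
inputs = mixing @ sources + 1e-3 * rng.standard_normal((n_mics, n_snapshots))

# Cross-spectral matrix of the near-field measurements (covariance here).
csd = inputs @ inputs.T / n_snapshots

# Each singular value acts as one virtual-source power level.
virtual_psd = np.linalg.svd(csd, compute_uv=False)
assert virtual_psd[1] > 100 * virtual_psd[2]      # two dominant virtual sources
```

In the paper the decomposition is performed frequency by frequency, so each singular value traces out a virtual-source power spectral density over the analysis band.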
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.
2011-01-01
The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in two-dimensional discrete signal analysis and processing, which enables a large number of applications. This article presents the description and synthesis in VHDL code of the FFT 2D with fixed-point binary representation using the Simulink HDL Coder tool of Matlab, showing a quick and easy way to handle overflow and underflow; the creation of registers, adders, and multipliers for complex data in VHDL; and the generation of test benches for verification of the generated code in the ModelSim tool. The main objective of developing the hardware architecture of the FFT 2D is the subsequent implementation of the following operations applied to images: frequency filtering, convolution, and correlation. The description and synthesis of the hardware architecture target the Spartan-3E family XC3S1200E FPGA from Xilinx.
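The frequency-domain operations the architecture targets are usually prototyped in software first. The sketch below (plain NumPy, not VHDL) checks that a pointwise product of 2-D FFTs reproduces circular 2-D convolution, and that conjugating one spectrum gives correlation instead; the function names are ours.

```python
import numpy as np

def fft2_convolve(img, kernel):
    """Circular 2-D convolution via the 2-D FFT (pointwise spectral product)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))

def fft2_correlate(img, kernel):
    """Circular 2-D correlation: conjugate one spectrum instead."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.conj(np.fft.fft2(kernel, s=img.shape))))

# Check against a direct circular-convolution sum on a small image.
rng = np.random.default_rng(2)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((3, 3))
direct = np.zeros_like(img)
for u in range(8):
    for v in range(8):
        for i in range(3):
            for j in range(3):
                direct[u, v] += ker[i, j] * img[(u - i) % 8, (v - j) % 8]
assert np.allclose(fft2_convolve(img, ker), direct)
```

A hardware FFT 2D implements the same algebra; the fixed-point concerns mentioned in the abstract (overflow, underflow) arise because the intermediate spectra grow beyond the input word length.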
On the inversion of geodetic integrals defined over the sphere using 1-D FFT
NASA Astrophysics Data System (ADS)
García, R. V.; Alejo, C. A.
2005-08-01
An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems while reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as those of the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine's integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed, and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. As with the CG method, the number of iterations needed to reach the optimum (i.e., smallest) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber technique, since no cyclic-convolution error exists.
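The projected Landweber iteration can be sketched in one dimension. The toy deconvolution below is our own setup, with a nonnegativity projection standing in for the constraint set: a nonnegative signal blurred by circular convolution is recovered by repeated gradient steps followed by projection.

```python
import numpy as np

def projected_landweber(g, k_fft, n_iter=2000, lower=0.0):
    """Landweber iteration for cyclic deconvolution g = k (*) f, with a
    projection (here: nonnegativity) applied after every update."""
    tau = 1.0 / np.max(np.abs(k_fft)) ** 2           # step size ensuring convergence
    f = np.zeros_like(g)
    G = np.fft.fft(g)
    for _ in range(n_iter):
        r_fft = G - k_fft * np.fft.fft(f)            # residual in the frequency domain
        f = f + tau * np.real(np.fft.ifft(np.conj(k_fft) * r_fft))
        f = np.maximum(f, lower)                     # projection onto the constraint set
    return f

n = 64
f_true = np.zeros(n); f_true[20:30] = 1.0
kernel = np.zeros(n); kernel[0], kernel[1], kernel[-1] = 0.6, 0.2, 0.2
k_fft = np.fft.fft(kernel)
g = np.real(np.fft.ifft(k_fft * np.fft.fft(f_true)))  # circularly blurred data
f_rec = projected_landweber(g, k_fft)
assert np.max(np.abs(f_rec - f_true)) < 1e-6
```

Each iteration costs only a pair of FFTs, which is the source of the computational advantage over direct matrix inversion noted in the abstract.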
Banerjee, Saswatee; Hoshino, Tetsuya; Cole, James B
2008-08-01
We introduce a new implementation of the finite-difference time-domain (FDTD) algorithm with recursive convolution (RC) for first-order Drude metals. We implemented RC for both Maxwell's equations for light polarized in the plane of incidence (TM mode) and the wave equation for light polarized normal to the plane of incidence (TE mode). We computed the Drude parameters at each wavelength using the measured value of the dielectric constant as a function of the spatial and temporal discretization to ensure both the accuracy of the material model and algorithm stability. For the TE mode, where Maxwell's equations reduce to the wave equation (even in a region of nonuniform permittivity) we introduced a wave equation formulation of RC-FDTD. This greatly reduces the computational cost. We used our methods to compute the diffraction characteristics of metallic gratings in the visible wavelength band and compared our results with frequency-domain calculations.
Discrete Fourier transforms of nonuniformly spaced data
NASA Technical Reports Server (NTRS)
Swan, P. R.
1982-01-01
Time series or spatial series of measurements taken with nonuniform spacings have failed to yield fully to analysis using the Discrete Fourier Transform (DFT). This is due to the fact that the formal DFT is the convolution of the transform of the signal with the transform of the nonuniform spacings. Two original methods are presented for deconvolving such transforms for signals containing significant noise. The first method solves a set of linear equations relating the observed data to values defined at uniform grid points, and then obtains the desired transform as the DFT of the uniform interpolates. The second method solves a set of linear equations relating the real and imaginary components of the formal DFT directly to those of the desired transform. The results of numerical experiments with noisy data are presented in order to demonstrate the capabilities and limitations of the methods.
Dynamical Localization for Discrete and Continuous Random Schrödinger Operators
NASA Astrophysics Data System (ADS)
Germinet, F.; De Bièvre, S.
We show for a large class of random Schrödinger operators H_ω on ℓ²(Z^d) and on L²(R^d) that dynamical localization holds, i.e. that, with probability one, for a suitable energy interval I and for q a positive real,
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact, and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for fitting large amounts of data. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
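The paper's explicit scheme is not reproduced here, but the flavor of Legendre-polynomial parametric fitting, and its agreement with the SVD-based least-squares route it is compared against, can be sketched with NumPy's polynomial module; the sample function is our own.

```python
import numpy as np
from numpy.polynomial import legendre

# Fit samples of a cubic with Legendre polynomials up to degree 3.
x = np.linspace(-1.0, 1.0, 201)
y = 2.0 - x + 0.5 * x**3

coef = legendre.legfit(x, y, deg=3)          # least-squares Legendre coefficients
y_fit = legendre.legval(x, coef)
assert np.allclose(y_fit, y, atol=1e-10)     # cubic lies in the basis span

# Compare against the SVD route (lstsq on the Legendre design matrix).
A = legendre.legvander(x, 3)
coef_svd, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(coef, coef_svd, atol=1e-10)
```

The orthogonality of the Legendre basis is what allows coefficient formulas to be written explicitly for continuous data, avoiding the matrix decomposition entirely.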
Discrete shearlet transform: faithful digitization concept and its applications
NASA Astrophysics Data System (ADS)
Lim, Wang-Q.
2011-09-01
Over the past years, various representation systems which sparsely approximate functions governed by anisotropic features, such as edges in images, have been proposed. Alongside the theoretical development of these systems, algorithmic realizations of the associated transforms were provided. However, one of the most common shortcomings of these frameworks is that they fail to provide a unified treatment of the continuum and digital worlds, i.e., to allow the digital theory to be a natural digitization of the continuum theory. Shearlets were introduced as a means to sparsely encode anisotropic singularities of multivariate data while providing a unified treatment of the continuous and digital realms. In this paper, we introduce a discrete framework which allows a faithful digitization of the continuum-domain shearlet transform based on compactly supported shearlets. Finally, we show numerical experiments demonstrating the potential of the discrete shearlet transform in several image processing applications.
NASA Astrophysics Data System (ADS)
You, Soyoung; Goldstein, David
2015-11-01
DNS is employed to simulate turbulent channel flow subject to a traveling wave body force field near the wall. The regions in which forces are applied are made progressively more discrete in a sequence of simulations to explore the boundary between the effects of discrete flow actuators and spatially continuum actuation. The continuum body force field is designed to correspond to the ``optimal'' resolvent mode of McKeon and Sharma (2010), whose gain is the largest singular value σ1; that is, the normalized harmonic forcing that gives the largest disturbance energy is the first singular mode, with gain σ1. 2D and 3D resolvent modes are examined at a modest Reτ of 180. For code validation, nominal flow simulations without discretized forcing are compared to previous work by Sharma and Goldstein (2014); as the forcing amplitude is increased, we find a decrease in the mean velocity and an increase in turbulent kinetic energy. The same force field is then sampled into isolated sub-domains to emulate the effect of discrete physical actuators. Several cases will be presented to explore the dependence between the level of discretization and the turbulent flow behavior.
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
NASA Astrophysics Data System (ADS)
Britt, Darrell Steven, Jr.
Problems of time-harmonic wave propagation arise in important fields of study such as geological surveying, radar detection/evasion, and aircraft design. These often involve high-frequency waves, which demand high-order methods to mitigate the dispersion error. We propose a high-order method for computing solutions to the variable-coefficient inhomogeneous Helmholtz equation in two dimensions on domains bounded by piecewise smooth curves of arbitrary shape with a finite number of boundary singularities at known locations. We utilize compact finite difference (FD) schemes on regular structured grids to achieve high-order accuracy due to their efficiency and simplicity, as well as the capability to approximate variable-coefficient differential operators. In this work, a 4th-order compact FD scheme for the variable-coefficient Helmholtz equation on a Cartesian grid in 2D is derived and tested. The well-known limitation of finite differences is that they lose accuracy when the boundary curve does not coincide with the discretization grid, which is a severe restriction on the geometry of the computational domain. Therefore, the algorithm presented in this work combines high-order FD schemes with the method of difference potentials (DP), which retains the efficiency of FD while allowing for boundary shapes that are not aligned with the grid without sacrificing the accuracy of the FD scheme. Additionally, the theory of DP allows for the universal treatment of the boundary conditions. One of the significant contributions of this work is the development of an implementation that accommodates general boundary conditions (BCs). In particular, Robin BCs with discontinuous coefficients are studied, for which we introduce a piecewise parameterization of the boundary curve. Problems with discontinuities in the boundary data itself are also studied. We observe that the design convergence rate suffers whenever the solution loses regularity due to the boundary conditions.
This is because the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means of restoring the design accuracy of the scheme in the presence of singularities at the boundary. While this method is well studied for low-order methods and for problems in which singularities arise from the geometry (e.g., corners), we adapt it to our high-order scheme for curved boundaries via a conformal mapping and show that it can also be used to restore accuracy when the singularity arises from the BCs rather than the geometry. Altogether, the proposed methodology for 2D boundary value problems is computationally efficient, easily handles a wide class of boundary conditions and boundary shapes that are not aligned with the discretization grid, and requires little modification for solving new problems.
Lens elliptic gamma function solution of the Yang-Baxter equation at roots of unity
NASA Astrophysics Data System (ADS)
Kels, Andrew P.; Yamazaki, Masahito
2018-02-01
We study the root of unity limit of the lens elliptic gamma function solution of the star-triangle relation, for an integrable model with continuous and discrete spin variables. This limit involves taking an elliptic nome to a primitive rN-th root of unity, where r is an existing integer parameter of the lens elliptic gamma function and N is an additional integer parameter. This is a singular limit of the star-triangle relation, and at subleading order of an asymptotic expansion another star-triangle relation is obtained, for a model with discrete spin variables in Z_{rN}. Some special choices of solutions of the equations of motion are shown to result in well-known discrete spin solutions of the star-triangle relation. The saddle point equations themselves are identified with three-leg forms of '3D-consistent' classical discrete integrable equations, known as Q4 and Q3(δ=0). We also comment on the implications for supersymmetric gauge theories and, in particular, on a close parallel with the works of Nekrasov and Shatashvili.
Analytic wave solution with helicon and Trivelpiece-Gould modes in an annular plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Johan; Pavarin, Daniele; Walker, Mitchell
2009-11-26
Helicon sources in an annular configuration have applications for plasma thrusters. The theory of Klozenberg et al. [J. P. Klozenberg, B. McNamara, and P. C. Thonemann, J. Fluid Mech. 21 (1965) 545-563] for the propagation and absorption of helicon and Trivelpiece-Gould modes in a cylindrical plasma has been generalized to annular plasmas. Analytic solutions are found in the annular case as well, but in the presence of both helicon and Trivelpiece-Gould modes an inhomogeneous linear system of equations must be solved to match the plasma and the inner and outer vacuum solutions. The linear system can be ill-conditioned or even exactly singular, leading to a dispersion relation with a discrete set of discontinuities. The coefficients for the analytic solution are calculated by solving the linear system with singular-value decomposition.
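Solving a possibly singular matching system by singular-value decomposition can be sketched generically; the 3x3 system below is invented for illustration and has nothing to do with the helicon dispersion system itself.

```python
import numpy as np

def svd_solve(A, b, rcond=1e-12):
    """Solve A x = b via SVD, discarding near-zero singular values so the
    solve stays well behaved even when A is singular or ill-conditioned."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)   # truncated pseudoinverse
    return Vt.conj().T @ (s_inv * (U.conj().T @ b))

# An exactly singular 3x3 system with a consistent right-hand side.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # row 2 = 2 * row 1 -> rank deficient
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 12.0, 2.0])
x = svd_solve(A, b)
assert np.allclose(A @ x, b)        # minimum-norm solution still satisfies the system
```

Truncating the small singular values is what keeps the computed coefficients finite at the isolated points where the matching matrix becomes exactly singular.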
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then a three-level discrete wavelet transform is applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, the discrete wavelet transform is applied to the watermark image after scrambling encryption. Our new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has better performance in terms of invisibility and robustness.
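A heavily simplified, single-level version of the DWT-SVD embedding idea can be sketched as follows: a Haar transform extracts the sub-bands, and the watermark is added to the singular values of the low-frequency (LL) band. All function names, the host size, and the scaling factor are ours; the paper uses three DWT levels, YIQ color handling, and differential evolution to tune the factor.

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform; returns the LL, HL, LH, HH sub-bands."""
    lo, hi = (a[0::2] + a[1::2]) / 2, (a[0::2] - a[1::2]) / 2
    return ((lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2,
            (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2)

def ihaar2(ll, hl, lh, hh):
    """Inverse of haar2."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1])); hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + hl, ll - hl
    hi[:, 0::2], hi[:, 1::2] = lh + hh, lh - hh
    a = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    a[0::2], a[1::2] = lo + hi, lo - hi
    return a

rng = np.random.default_rng(4)
host = rng.standard_normal((8, 8))
ll, hl, lh, hh = haar2(host)
U, S, Vt = np.linalg.svd(ll)

alpha, w = 0.01, np.array([1.0, 0.0, 1.0, 1.0])   # scaling factor and watermark bits
marked = ihaar2(U @ np.diag(S + alpha * w) @ Vt, hl, lh, hh)

# The watermark survives in the singular values of the marked LL band.
S_marked = np.linalg.svd(haar2(marked)[0], compute_uv=False)
assert np.allclose(np.sort(S_marked), np.sort(S + alpha * w))
assert np.allclose(ihaar2(*haar2(host)), host)     # the transform is invertible
```

The small alpha keeps the perturbation imperceptible while the singular values, being stable under mild distortions, carry the mark; choosing alpha is exactly the trade-off the paper optimizes with differential evolution.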
NASA Astrophysics Data System (ADS)
Zhang, Liangyin; Chen, Michael Z. Q.; Li, Chanying
2017-07-01
In this paper, two new pairs of dual continuous-time algebraic Riccati equations (CAREs) and dual discrete-time algebraic Riccati equations (DAREs) are proposed. The dual DAREs are first studied under some nonsingularity assumptions on the system matrix and the parameter matrix. Then, in the case of singular matrices, a generalised inverse is introduced to deal with the dual DARE problem. These dual AREs lead naturally to an iterative procedure for finding the anti-stabilising solutions, especially of the DARE, by means of the procedure for the stabilising solutions. Furthermore, we provide counterpart results on the set of all solutions to the DARE, inspired by the results for the CARE. Two examples are presented to illustrate the theoretical results.
Image restoration consequences of the lack of a two variable fundamental theorem of algebra
NASA Technical Reports Server (NTRS)
Kreznar, J. E.
1977-01-01
It has been shown that, at least for one pair of otherwise attractive spaces of images and operators, singular convolution operators do not necessarily have nonsingular neighbors. This result is a nuisance in image restoration. It is suggested that this difficulty might be overcome if the following three conditions are satisfied: (1) a weaker constraint than absolute summability can be identified for useful operators; (2) if the z-transform of an operator has at most a finite number of zeros on the unit torus, then the inverse z-transform formula yields an inverse operator meeting the weaker constraint; and (3) operators whose z-transforms are zero on a set of real, closed curves on the unit torus have neighbors which are zero at only a finite set of points on the unit torus.
Classification of crystal structure using a convolutional neural network
Park, Woon Bae; Chung, Jiyong; Sohn, Keemin; Pyo, Myoungho
2017-01-01
A deep machine-learning technique based on a convolutional neural network (CNN) is introduced. It has been used for the classification of powder X-ray diffraction (XRD) patterns in terms of crystal system, extinction group and space group. About 150 000 powder XRD patterns were collected and used as input for the CNN with no handcrafted engineering involved, and thereby an appropriate CNN architecture was obtained that allowed determination of the crystal system, extinction group and space group. In sharp contrast with the traditional use of powder XRD pattern analysis, the CNN never treats powder XRD patterns as a deconvoluted and discrete peak position or as intensity data, but instead the XRD patterns are regarded as nothing but a pattern similar to a picture. The CNN interprets features that humans cannot recognize in a powder XRD pattern. As a result, accuracy levels of 81.14, 83.83 and 94.99% were achieved for the space-group, extinction-group and crystal-system classifications, respectively. The well trained CNN was then used for symmetry identification of unknown novel inorganic compounds. PMID:28875035
Edelman, Mark
2015-07-01
In this paper, we consider a simple general form of a deterministic system with power-law memory whose state can be described by one variable and whose evolution is given by a generating function. A new value of the system's variable is a total (a convolution) of the generating functions of all previous values of the variable with weights, which are powers of the time passed. In discrete cases, these systems can be described by difference equations in which a fractional difference on the left-hand side is equal to a total (also a convolution) of the generating functions of all previous values of the system's variable with the fractional Eulerian number weights on the right-hand side. In the continuous limit, the considered systems can be described by the Grünwald-Letnikov fractional differential equations, which are equivalent to the Volterra integral equations of the second kind. New properties of the fractional Eulerian numbers and possible applications of the results are discussed.
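A minimal sketch of the discrete picture is given below, using the standard Grünwald-Letnikov weights (rather than the paper's fractional Eulerian numbers) to form the memory convolution; the function names are ours.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), computed by
    the standard recursion w_j = w_{j-1} * (j - 1 - alpha) / j."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def frac_difference(x, alpha):
    """Fractional difference of order alpha: a convolution of all previous
    values with the power-law memory weights."""
    w = gl_weights(alpha, len(x))
    return [sum(w[j] * x[k - j] for j in range(k + 1)) for k in range(len(x))]

x = [1.0, 4.0, 9.0, 16.0, 25.0]
# alpha = 1 recovers the ordinary backward difference (first entry keeps x[0]).
assert frac_difference(x, 1.0)[1:] == [3.0, 5.0, 7.0, 9.0]
```

For non-integer alpha the weights decay as a power law, so every past value contributes; this is the power-law memory described in the abstract.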
λ elements for singular problems in CFD: Viscoelastic fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a two-dimensional λ element formulation for viscoelastic fluid flow containing point singularities in the flow field. Flows of viscoelastic fluids, even without singularities, are a difficult class of problems for increasing Deborah number or Weissenberg number due to the increased dominance of convective terms and thus increased hyperbolicity. In the present work the equations of fluid motion and the constitutive laws are recast in the form of a first-order system of coupled equations with the use of auxiliary variables. The velocity, pressure, and stresses are interpolated using equal-order C⁰ λ element approximations. The Least Squares Finite Element Method (LSFEM) is used to construct the integral form (error functional I) corresponding to these equations. The error functional is constructed by taking the integrated sum of the squares of the errors or residuals (over the whole discretization) resulting when the element approximation is substituted into these equations. The conditions resulting from the minimization of the error functional are satisfied by using Newton's method with line search. LSFEM has much superior performance when dealing with non-linear and convection-dominated problems.
Finite-time singularity signature of hyperinflation
NASA Astrophysics Data System (ADS)
Sornette, D.; Takayasu, H.; Zhou, W.-X.
2003-07-01
We present a novel analysis extending the recent work of Mizuno et al. (Physica A 308 (2002) 411) on the hyperinflations of Germany (1920/1/1-1923/11/1), Hungary (1945/4/30-1946/7/15), Brazil (1969-1994), Israel (1969-1985), Nicaragua (1969-1991), Peru (1969-1990) and Bolivia (1969-1985). On the basis of a generalization of Cagan's model of inflation based on the mechanism of “inflationary expectation” of positive feedbacks between realized growth rate and people's expected growth rate, we find that hyperinflations can be characterized by a power law singularity culminating at a critical time tc. Mizuno et al.'s double-exponential function can be seen as a discrete time-step approximation of our more general non-linear ODE formulation of the price dynamics which exhibits a finite-time singular behavior. This extension of Cagan's model, which makes natural the appearance of a critical time tc, has the advantage of providing a well-defined end of the clearly unsustainable hyperinflation regime. We find an excellent and reliable agreement between theory and data for Germany, Hungary, Peru and Bolivia. For Brazil, Israel and Nicaragua, the super-exponential growth seems to be already contaminated significantly by the existence of a cross-over to a stationary regime.
Stochastic mixed-mode oscillations in a three-species predator-prey model
NASA Astrophysics Data System (ADS)
Sadhu, Susmita; Kuehn, Christian
2018-03-01
The effect of demographic stochasticity, in the form of Gaussian white noise, in a predator-prey model with one fast and two slow variables is studied. We derive the stochastic differential equations (SDEs) from a discrete model. For suitable parameter values, the deterministic drift part of the model admits a folded node singularity and exhibits a singular Hopf bifurcation. We focus on the parameter regime near the Hopf bifurcation, where small amplitude oscillations exist as stable dynamics in the absence of noise. In this regime, the stochastic model admits noise-driven mixed-mode oscillations (MMOs), which capture the intermediate dynamics between two cycles of population outbreaks. We perform numerical simulations to calculate the distribution of the random number of small oscillations between successive spikes for varying noise intensities and distance to the Hopf bifurcation. We also study the effect of noise on a suitable Poincaré map. Finally, we prove that the stochastic model can be transformed into a normal form near the folded node, which can be linked to recent results on the interplay between deterministic and stochastic small amplitude oscillations. The normal form can also be used to study the parameter influence on the noise level near folded singularities.
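Simulating SDEs of this kind is typically done with an Euler-Maruyama step. The generic sketch below is not the three-species model itself; the drift, noise level, and step size are ours, and the zero-noise case is used to check the integrator against the deterministic dynamics.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama integration of dX = drift(X) dt + sigma dW."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.standard_normal(x.shape) * np.sqrt(dt)   # Brownian increment
        x = x + drift(x) * dt + sigma * dw
        path.append(x.copy())
    return np.array(path)

# Zero-noise check: reduces to explicit Euler for the logistic ODE x' = x(1 - x).
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: x * (1 - x), 0.0, [0.1], 1e-3, 10000, rng)
assert abs(path[-1, 0] - 1.0) < 5e-3   # converges toward the stable equilibrium
```

With sigma > 0, repeated runs of such a scheme give the sample paths from which statistics like the distribution of small oscillations between spikes can be collected.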
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured-grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracy on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.
Emotional aging: a discrete emotions perspective.
Kunzmann, Ute; Kappes, Cathleen; Wrosch, Carsten
2014-01-01
Perhaps the most important single finding in the field of emotional aging has been that the overall quality of affective experience steadily improves during adulthood and can be maintained into old age. Recent lifespan developmental theories have provided motivation- and experience-based explanations for this phenomenon. These theories suggest that, as individuals grow older, they become increasingly motivated and able to regulate their emotions, which could result in reduced negativity and enhanced positivity. The objective of this paper is to expand existing theories and empirical research on emotional aging by presenting a discrete emotions perspective. To illustrate the usefulness of this approach, we focus on a discussion of the literature examining age differences in anger and sadness. These two negative emotions have typically been subsumed under the singular concept of negative affect. From a discrete emotions perspective, however, they are highly distinct and show multidirectional age differences. We propose that such contrasting age differences in specific negative emotions have important implications for our understanding of long-term patterns of affective well-being across the adult lifespan.
Szidarovszky, Tamás; Császár, Attila G; Czakó, Gábor
2010-08-01
Several techniques of varying efficiency are investigated, which treat all singularities present in the triatomic vibrational kinetic energy operator given in orthogonal internal coordinates of the two distances-one angle type. The strategies are based on the use of a direct-product basis built from one-dimensional discrete variable representation (DVR) bases corresponding to the two distances and orthogonal Legendre polynomials, or the corresponding Legendre-DVR basis, corresponding to the angle. The use of Legendre functions ensures the efficient treatment of the angular singularity. Matrix elements of the singular radial operators are calculated employing DVRs using the quadrature approximation as well as special DVRs satisfying the boundary conditions and thus allowing for the use of exact DVR expressions. Potential optimized (PO) radial DVRs, based on one-dimensional Hamiltonians with potentials obtained by fixing or relaxing the two non-active coordinates, are also studied. The numerical calculations employed Hermite-DVR, spherical-oscillator-DVR, and Bessel-DVR bases as the primitive radial functions. A new analytical formula is given for the determination of the matrix elements of the singular radial operator using the Bessel-DVR basis. The usually claimed failure of the quadrature approximation in certain singular integrals is revisited in one and three dimensions. It is shown that as long as no potential optimization is carried out the quadrature approximation works almost as well as the exact DVR expressions. If wave functions with finite amplitude at the boundary are to be computed, the basis sets need to meet the required boundary conditions. The present numerical results also confirm that PO-DVRs should be constructed employing relaxed potentials and PO-DVRs can be useful for optimizing quadrature points for calculations applying large coordinate intervals and describing large-amplitude motions. 
The utility and efficiency of the different algorithms are demonstrated by the computation of converged near-dissociation vibrational energy levels for the H₃⁺ molecular ion.
Log-Concavity and Strong Log-Concavity: a review
Saumard, Adrien; Wellner, Jon A.
2016-01-01
We review and formulate results concerning log-concavity and strong log-concavity in both discrete and continuous settings. We show how preservation of log-concavity and strong log-concavity on ℝ under convolution follows from a fundamental monotonicity result of Efron (1969). We provide a new proof of Efron's theorem using the recent asymmetric Brascamp-Lieb inequality due to Otto and Menz (2013). Along the way we review connections between log-concavity and other areas of mathematics and statistics, including concentration of measure, log-Sobolev inequalities, convex geometry, MCMC algorithms, Laplace approximations, and machine learning.
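The discrete-setting preservation result can be checked numerically: a nonnegative sequence p is log-concave when p_k² ≥ p_{k−1} p_{k+1}, and the convolution of two log-concave sequences is again log-concave. A minimal sketch (the binomial test sequences are our own choice, not from the paper):

```python
import numpy as np

def is_log_concave(p, tol=1e-12):
    """Check p_k^2 >= p_{k-1} p_{k+1} for a nonnegative sequence."""
    p = np.asarray(p, dtype=float)
    return bool(np.all(p[1:-1] ** 2 >= p[:-2] * p[2:] - tol))

# Binomial coefficient sequences are log-concave.
a = np.array([1.0, 3.0, 3.0, 1.0])       # binomial(3, k)
b = np.array([1.0, 4.0, 6.0, 4.0, 1.0])  # binomial(4, k)

# Their convolution gives binomial(7, k), which is again log-concave.
c = np.convolve(a, b)
```

Here the convolution identity (1+x)³(1+x)⁴ = (1+x)⁷ makes the expected result easy to verify by hand.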
NASA Technical Reports Server (NTRS)
Stanley, William D.
1994-01-01
An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
A pseudospectra-based approach to non-normal stability of embedded boundary methods
NASA Astrophysics Data System (ADS)
Rapaka, Narsimha; Samtaney, Ravi
2017-11-01
We present non-normal linear stability of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods including both central and upwind-biased schemes. Stability is guaranteed for α ≤ α_max, where α_max ranges between 0.5 and 0.77 depending on the discretization scheme. Also, the stability characteristics remain the same in both one and two dimensions. Sharper limits on the sufficient conditions for stability are obtained from the pseudospectral radius (the Kreiss constant) than the restrictive limits based on the usual singular value decomposition analysis. We present a simple and robust reclassification scheme for the ghost cells ("hybrid ghost cells") to ensure Lax stability of the discrete systems. This has been tested successfully for both low and high order discretization schemes with transient growth of at most O(1). Moreover, we present a stable, fourth order EB reconstruction scheme. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.
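The pseudospectral-radius idea used above can be illustrated on small matrices: the resolvent norm is the reciprocal of the smallest singular value of zI − A, and a crude grid estimate of the discrete-time Kreiss constant distinguishes normal from non-normal operators with the same spectrum. This is only a conceptual sketch (the matrices and the grid are our own, unrelated to the EB discretizations in the paper):

```python
import numpy as np

def resolvent_norm(A, z):
    """||(zI - A)^{-1}||_2 = 1 / sigma_min(zI - A)."""
    n = A.shape[0]
    s = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)
    return 1.0 / s[-1]

def kreiss_constant(A):
    """Crude grid estimate of the discrete-time Kreiss constant
    sup_{|z|>1} (|z| - 1) ||(zI - A)^{-1}||; values well above 1
    signal non-normal transient growth of the powers of A."""
    best = 0.0
    for r in np.linspace(1.01, 3.0, 60):
        for th in np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False):
            best = max(best, (r - 1.0) * resolvent_norm(A, r * np.exp(1j * th)))
    return best

A_normal = np.diag([0.5, 0.3])                   # normal: no transient growth
A_nonnorm = np.array([[0.5, 10.0], [0.0, 0.5]])  # same spectral radius, non-normal
```

Both matrices are power-bounded (spectral radius below one), but only the non-normal one exhibits the transient amplification that eigenvalue analysis alone misses.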
CMDS9: Continuum Mechanics and Discrete Systems 9, Istanbul Technical University, Macka. Abstracts.
1998-07-01
that can only be achieved via cooperative behavior of the cells. It can be viewed as the action of a singular feedback between the micro-level (the ... optimal micro-geometries of multicomponent mixtures. Also, we discuss dynamics of a transition in natural unstable systems that leads to a micro ... failure process. This occurs once the impact load reaches a critical threshold level and results in a collection of oriented matrix micro-cracks
Eulerian Dynamics with a Commutator Forcing
2017-01-09
SIAM Review 56(4) (2014) 577–621. [Pes2015] J. Peszek. Discrete Cucker-Smale flocking model with a weakly singular weight. SIAM J. Math. Anal., to ... viscosities in bounded domains. J. Math. Pures Appl. (9), 87(2):227–235, 2007. [CV2010] L. Caffarelli, A. Vasseur, Drift diffusion equations with ... Further time regularity for fully non-linear parabolic equations. Math. Res. Lett., 22(6):1749–1766, 2015. [CCTT2016] José A. Carrillo, Young-Pil
Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E
2007-01-01
To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
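The core SSA machinery described above — a low-rank model of a reference window, compared against later windows — can be sketched compactly. This is an illustrative toy, not the base-hold algorithm or FHR data; the window length L, rank r, and the frequency-change "event" are our own choices:

```python
import numpy as np

def ssa_score(reference, test_win, L=10, r=2):
    """SSA-style novelty score: model the reference window by the top-r
    left singular vectors of its trajectory (Hankel) matrix, then return
    the fraction of the test window's trajectory energy that falls
    outside that subspace."""
    def traj(x):
        return np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])
    U, _, _ = np.linalg.svd(traj(reference), full_matrices=False)
    Ur = U[:, :r]
    T = traj(test_win)
    resid = T - Ur @ (Ur.T @ T)
    return np.linalg.norm(resid) / np.linalg.norm(T)

t = np.arange(200.0)
x = np.sin(0.5 * t)
x[120:] = np.sin(1.5 * t[120:])   # frequency change: a crude stand-in
                                  # for a deceleration-like event
score_same = ssa_score(x[:60], x[60:120])
score_change = ssa_score(x[:60], x[120:180])
```

A change point is flagged when the score exceeds a threshold; the base-hold modification in the paper additionally freezes the reference model while successive change events are in progress.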
Singular perturbations and time scales in the design of digital flight control systems
NASA Technical Reports Server (NTRS)
Naidu, Desineni S.; Price, Douglas B.
1988-01-01
The results are presented of application of the methodology of Singular Perturbations and Time Scales (SPATS) to the control of digital flight systems. A block diagonalization method is described to decouple a full order, two time (slow and fast) scale, discrete control system into reduced order slow and fast subsystems. Basic properties and numerical aspects of the method are discussed. A composite, closed-loop, suboptimal control system is constructed as the sum of the slow and fast optimal feedback controls. The application of this technique to an aircraft model shows close agreement between the exact solutions and the decoupled (or composite) solutions. The main advantage of the method is the considerable reduction in the overall computational requirements for the evaluation of optimal guidance and control laws. The significance of the results is that it can be used for real time, onboard simulation. A brief survey is also presented of digital flight systems.
A fully Sinc-Galerkin method for Euler-Bernoulli beam models
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.; Lund, J.
1990-01-01
A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete system which corresponds to the time-dependent partial differential equations of interest are then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamba, Irene M.; ICES, The University of Texas at Austin, 201 E. 24th St., Stop C0200, Austin, TX 78712; Haack, Jeffrey R.
2014-08-01
We present the formulation of a conservative spectral method for the Boltzmann collision operator with anisotropic scattering cross-sections. The method is an extension of the conservative spectral method of Gamba and Tharkabhushanam [17,18], which uses the weak form of the collision operator to represent the collisional term as a weighted convolution in Fourier space. The method is tested by computing the collision operator with a suitably cut-off angular cross section and comparing the results with the solution of the Landau equation. We analytically study the convergence rate of the Fourier transformed Boltzmann collision operator in the grazing collisions limit to the Fourier transformed Landau collision operator under the assumption of some regularity and decay conditions of the solution to the Boltzmann equation. Our results show that the angular singularity which corresponds to the Rutherford scattering cross section is the critical singularity for which a grazing collision limit exists for the Boltzmann operator. Additionally, we numerically study the differences between homogeneous solutions of the Boltzmann equation with the Rutherford scattering cross section and an artificial cross section, which give convergence to solutions of the Landau equation at different asymptotic rates. We numerically show the rate of the approximation as well as the consequences for the rate of entropy decay for homogeneous solutions of the Boltzmann equation and Landau equation.
Discrete Inverse and State Estimation Problems
NASA Astrophysics Data System (ADS)
Wunsch, Carl
2006-06-01
The problems of making inferences about the natural world from noisy observations and imperfect theories occur in almost all scientific disciplines. This book addresses these problems using examples taken from geophysical fluid dynamics. It focuses on discrete formulations, both static and time-varying, known variously as inverse, state estimation or data assimilation problems. Starting with fundamental algebraic and statistical ideas, the book guides the reader through a range of inference tools including the singular value decomposition, Gauss-Markov and minimum variance estimates, Kalman filters and related smoothers, and adjoint (Lagrange multiplier) methods. The final chapters discuss a variety of practical applications to geophysical flow problems. Discrete Inverse and State Estimation Problems is an ideal introduction to the topic for graduate students and researchers in oceanography, meteorology, climate dynamics, and geophysical fluid dynamics. It is also accessible to a wider scientific audience; the only prerequisite is an understanding of linear algebra. It provides a comprehensive introduction to discrete methods of inference from incomplete information, is based upon 25 years of practical experience using real data and models, develops sequential and whole-domain analysis methods from simple least-squares, and contains many examples and problems, with web-based support through MIT OpenCourseWare.
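The singular value decomposition mentioned above is the workhorse of discrete inverse problems: a truncated SVD gives a minimum-norm least-squares estimate while controlling noise amplification. A minimal sketch with a synthetic observation operator (all data here are invented, not from the book):

```python
import numpy as np

def tsvd_solve(E, y, k):
    """Truncated-SVD estimate of x in y = E x + noise, keeping only the
    k largest singular values -- a basic regularized discrete inverse."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])
E = rng.standard_normal((20, 3))              # hypothetical observation operator
y = E @ x_true + 0.01 * rng.standard_normal(20)
x_hat = tsvd_solve(E, y, k=3)                 # full rank: ordinary least squares
```

Dropping small singular values (k below full rank) trades bias for variance, which is the essential compromise in ill-posed state estimation.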
Current problems in applied mathematics and mathematical physics
NASA Astrophysics Data System (ADS)
Samarskii, A. A.
Papers are presented on such topics as mathematical models in immunology, mathematical problems of medical computer tomography, classical orthogonal polynomials of a discrete variable, and boundary layer methods for singular perturbation problems for partial differential equations. Consideration is also given to the computer simulation of supernova explosions, nonstationary internal waves in a stratified fluid, the description of turbulent flows by unsteady solutions of the Navier-Stokes equations, and the reduced Galerkin method for external diffraction problems using the spline approximation of fields.
NASA Astrophysics Data System (ADS)
LaRue, James P.; Luzanov, Yuriy
2013-05-01
A new extension to the way in which the Bidirectional Associative Memory (BAM) algorithms are implemented is presented here. We will show that by utilizing the singular value decomposition (SVD) and integrating principles of independent component analysis (ICA) into the nullspace (NS), we have created a novel approach to mitigating spurious attractors. We demonstrate this with two applications. The first application utilizes a one-layer association, while the second application is modeled after the several hierarchical associations of ventral pathways. The first application will detail the way in which we manage the associations in terms of matrices. The second application will take what we have learned from the first example and apply it to a cascade of a convolutional neural network (CNN) and a perceptron, this being our signal-processing model of the ventral pathways, i.e., the visual system.
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
NASA Astrophysics Data System (ADS)
Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.
2014-12-01
Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least-squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
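The Gaussian-mixture waveform modeling step can be sketched with a nonlinear least-squares fit to a synthetic two-echo waveform. This is a toy illustration in Python rather than the paper's R implementation, and the pulse parameters are invented, not NEON data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    """Two Gaussian echoes -- a common parametric model for a lidar
    waveform with a canopy return and a ground return."""
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

t = np.linspace(0.0, 100.0, 400)
true = (1.0, 30.0, 4.0, 0.6, 55.0, 6.0)   # synthetic echo parameters
rng = np.random.default_rng(1)
y = two_gaussians(t, *true) + 0.01 * rng.standard_normal(t.size)

p0 = (0.8, 25.0, 5.0, 0.5, 60.0, 5.0)     # rough initial guess
popt, _ = curve_fit(two_gaussians, t, y, p0=p0)
```

The fitted echo centers (popt[1], popt[4]) are the quantities a DTM or canopy-height product would be derived from; deconvolution (Gold, Richardson-Lucy) sharpens overlapping echoes before this step.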
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00; Catalogue identifier: AEHV_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GNU Public Licence; No. of lines in distributed program, including test data, etc.: 45 736; No. of bytes in distributed program, including test data, etc.: 911 569; Distribution format: tar.gz; Programming language: Fortran-77; Computer: All; Operating system: All; RAM: typically 3 Mbytes; Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline coefficients by solving (coupled) triangular matrix equations with a forward substitution algorithm. Fast computation of convolution integrals as weighted sums of spline coefficients, with weights derived from user-given convolution kernels. Restrictions: Accuracy and speed are determined by the density of the evolution grid. Running time: Less than 10 ms on a 2 GHz Intel Core 2 Duo processor to evolve the gluon density and 12 quark densities at next-to-next-to-leading order over a large kinematic range.
Zhou, Xian; Zhong, Kangping; Gao, Yuliang; Sui, Qi; Dong, Zhenghua; Yuan, Jinhui; Wang, Liang; Long, Keping; Lau, Alan Pak Tao; Lu, Chao
2015-04-06
Discrete multi-tone (DMT) modulation is an attractive modulation format for short-reach applications to achieve the best use of available channel bandwidth and signal-to-noise ratio (SNR). In order to realize polarization-multiplexed DMT modulation with direct detection, we derive an analytical transmission model for dual polarizations with intensity modulation and direct detection (IM-DD) in this paper. Based on the model, we propose a novel polarization-interleave-multiplexed DMT modulation with direct detection (PIM-DMT-DD) transmission system, where the polarization de-multiplexing can be achieved by using a simple multiple-input-multiple-output (MIMO) equalizer and the transmission performance is optimized over two distinct received polarization states to eliminate the singularity issue of MIMO demultiplexing algorithms. The feasibility and effectiveness of the proposed PIM-DMT-DD system are investigated via theoretical analyses and simulation studies.
Exact folded-band chaotic oscillator.
Corron, Ned J; Blakely, Jonathan N
2012-06-01
An exactly solvable chaotic oscillator with folded-band dynamics is shown. The oscillator is a hybrid dynamical system containing a linear ordinary differential equation and a nonlinear switching condition. Bounded oscillations are provably chaotic, and successive waveform maxima yield a one-dimensional piecewise-linear return map with segments of both positive and negative slopes. Continuous-time dynamics exhibit a folded-band topology similar to Rössler's oscillator. An exact solution is written as a linear convolution of a fixed basis pulse and a discrete binary sequence, from which an equivalent symbolic dynamics is obtained. The folded-band topology is shown to be dependent on the symbol grammar.
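The "linear convolution of a fixed basis pulse and a discrete binary sequence" structure described above can be made concrete with a toy pulse. The pulse shape and sequence here are illustrative inventions, not the oscillator's actual basis pulse:

```python
import numpy as np

# A hypothetical short basis pulse (peak at index 1) and a binary symbol
# sequence; the waveform is their linear convolution, and sampling at the
# pulse peaks recovers the symbolic dynamics exactly.
pulse = np.array([0.25, 1.0, 0.25])
symbols = np.array([1, -1, -1, 1, 1])      # binary sequence s_n in {-1, +1}

upsampled = np.zeros(3 * len(symbols))     # one symbol per pulse period
upsampled[::3] = symbols
waveform = np.convolve(upsampled, pulse)   # linear convolution

recovered = np.sign(waveform[1::3][:len(symbols)]).astype(int)
```

This mirrors how an equivalent symbolic dynamics is read off the exact solution: the continuous waveform carries the binary grammar, and the grammar in turn encodes the folded-band topology.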
Congdon, Peter
2014-04-01
Health data may be collected across one spatial framework (e.g. health provider agencies), but contrasts in health over another spatial framework (neighbourhoods) may be of policy interest. In the UK, population prevalence totals for chronic diseases are provided for populations served by general practitioner practices, but not for neighbourhoods (small areas of circa 1500 people), raising the question whether data for one framework can be used to provide spatially interpolated estimates of disease prevalence for the other. A discrete process convolution is applied to this end and has advantages when there are a relatively large number of area units in one or other framework. Additionally, the interpolation is modified to take account of the observed neighbourhood indicators (e.g. hospitalisation rates) of neighbourhood disease prevalence. These are reflective indicators of neighbourhood prevalence viewed as a latent construct. An illustrative application is to prevalence of psychosis in northeast London, containing 190 general practitioner practices and 562 neighbourhoods, including an assessment of sensitivity to kernel choice (e.g. normal vs exponential). This application illustrates how a zero-inflated Poisson can be used as the likelihood model for a reflective indicator.
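A discrete process convolution of the kind used above evaluates a latent surface at arbitrary target locations as a kernel-weighted sum over a coarse grid of knot values, which is what makes cross-framework interpolation cheap. A minimal sketch (knot layout, kernel scale, and weights are all hypothetical):

```python
import numpy as np

def process_convolution(grid_xy, knot_xy, knot_w, scale=1.5):
    """Discrete process convolution: the latent spatial surface at each
    target point is a kernel-weighted sum over knot values.  A normal
    (Gaussian) kernel is used here; an exponential kernel is the obvious
    alternative considered in the sensitivity analysis."""
    d2 = ((grid_xy[:, None, :] - knot_xy[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-0.5 * d2 / scale ** 2)
    return K @ knot_w

rng = np.random.default_rng(3)
knots = rng.uniform(0.0, 10.0, size=(25, 2))   # coarse knot grid
w = rng.standard_normal(25)                    # latent knot values
grid = rng.uniform(0.0, 10.0, size=(100, 2))   # target locations
surface = process_convolution(grid, knots, w)
```

Because the number of knots is small relative to either spatial framework, the same knot process can be projected onto practice populations and onto neighbourhoods, which is the advantage the abstract notes when one framework has many area units.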
Gottschlich, Carsten
2016-01-01
We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification.
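The DCT-and-compare pipeline described above can be sketched end to end: transform each patch, emit one bit per chosen coefficient pair, and histogram the binary codes. The coefficient pairs and random patches below are illustrative, not the paper's configuration:

```python
import numpy as np
from scipy.fft import dctn

def ccp_histogram(patches, pairs):
    """Sketch of a comparison-pattern descriptor in the spirit of CCP:
    DCT each patch, binarize by comparing chosen coefficient pairs, and
    histogram the resulting codes over all patches of an image."""
    codes = []
    for patch in patches:
        C = dctn(patch, norm='ortho')
        bits = [1 if C[i1, j1] > C[i2, j2] else 0
                for (i1, j1), (i2, j2) in pairs]
        codes.append(sum(b << k for k, b in enumerate(bits)))
    hist = np.bincount(codes, minlength=2 ** len(pairs)).astype(float)
    return hist / hist.sum()            # relative frequencies

rng = np.random.default_rng(2)
patches = [rng.standard_normal((8, 8)) for _ in range(50)]
pairs = [((0, 1), (1, 0)), ((0, 2), (2, 0)), ((1, 1), (2, 2))]
h = ccp_histogram(patches, pairs)
```

In the full method the patches would first be rotation-normalized using the orientation field, and several such histograms would be concatenated into the classifier's feature vector.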
Antoniades, Andreas; Spyrou, Loukianos; Martin-Lopez, David; Valentin, Antonio; Alarcon, Gonzalo; Sanei, Saeid; Cheong Took, Clive
2017-12-01
Detection algorithms for electroencephalography (EEG) data, especially in the field of interictal epileptiform discharge (IED) detection, have traditionally employed handcrafted features, which utilized specific characteristics of neural responses. Although these algorithms achieve high accuracy, mere detection of an IED holds little clinical significance. In this paper, we consider deep learning for epileptic subjects to accommodate automatic feature generation from intracranial EEG data, while also providing clinical insight. Convolutional neural networks are trained in a subject independent fashion to demonstrate how meaningful features are automatically learned in a hierarchical process. We illustrate how the convolved filters in the deepest layers provide insight toward the different types of IEDs within the group, as confirmed by our expert clinicians. The morphology of the IEDs found in filters can help evaluate the treatment of a patient. To improve the learning of the deep model, moderately different score classes are utilized as opposed to binary IED and non-IED labels. The resulting model achieves state-of-the-art classification performance and is also invariant to time differences between the IEDs. This paper suggests that deep learning is suitable for automatic feature generation from intracranial EEG data, while also providing insight into the data.
A discrete mechanics approach to dislocation dynamics in BCC crystals
NASA Astrophysics Data System (ADS)
Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.
2007-03-01
A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers' vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem based on considerations of metastability is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation based crystal plasticity with fully atomistic resolution.
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
Paszyńska, A.; Paszyński, M.; Jopek, K.; ...
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
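The indices named above have direct matrix expressions: reactivity is the largest eigenvalue of the symmetric part of the community matrix, and the amplification envelope at time t is the largest singular value of exp(At). A minimal numpy sketch with a hypothetical two-species matrix (values illustrative, not from the paper):

```python
import numpy as np

def expm(A, terms=60):
    # Taylor-series matrix exponential; adequate for small, well-scaled A.
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

# Hypothetical stable community matrix (eigenvalues -1 and -2).
A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])

# Reactivity: largest eigenvalue of the symmetric part (A + A^T)/2.
H = (A + A.T) / 2
reactivity = np.linalg.eigvalsh(H).max()

# Amplification envelope: largest singular value of exp(At).
rho = lambda t: np.linalg.svd(expm(A * t), compute_uv=False)[0]

assert reactivity > 0      # asymptotically stable yet reactive
assert rho(0.5) > 1.0      # transient amplification of some perturbation
```

The optimal perturbation at time t is the right singular vector paired with that largest singular value.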
NASA Astrophysics Data System (ADS)
Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.
1997-02-01
We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of the FXCT imaging is that of computed tomography of the first generation. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by the Monte Carlo simulation and the experiment to confirm its effectiveness.
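The core numerical step, solving a discretized measurement equation A x = b in the least-squares sense via the singular value decomposition, can be sketched with a synthetic system standing in for the FXCT projection operator and data:

```python
import numpy as np

# Sketch: SVD-based least-squares solve of a discretized measurement
# equation A x = b. A and b are synthetic stand-ins, not FXCT data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))          # overdetermined system
x_true = rng.standard_normal(10)
b = A @ x_true

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = s.max() * 1e-10
s_inv = np.where(s > tol, 1.0 / s, 0.0)    # drop negligible singular values
x = Vt.T @ (s_inv * (U.T @ b))             # pseudoinverse solution

assert np.allclose(x, x_true)
```

Truncating small singular values is what makes the solve robust when the discretized operator is ill-conditioned.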
NASA Technical Reports Server (NTRS)
Turc, Catalin; Anand, Akash; Bruno, Oscar; Chaubell, Julian
2011-01-01
We present a computational methodology (a novel Nystrom approach based on use of a non-overlapping patch technique and Chebyshev discretizations) for efficient solution of problems of acoustic and electromagnetic scattering by open surfaces. Our integral equation formulations (1) incorporate, as ansatz, the singular nature of open-surface integral-equation solutions, and (2) for the Electric Field Integral Equation (EFIE), use analytical regularizers that effectively reduce the number of iterations required by Krylov-subspace iterative linear-algebra solvers.
Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules
1999-01-01
In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two dimensional heat equation. We employ second-order and fourth-order schemes, respectively, for the spatial derivatives, and the discretization method gives rise to a linear system of equations. We show that the matrix of the system is non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
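A minimal sketch of the spatial side: the second-order five-point Laplacian on a square with Dirichlet boundaries, paired here with a single backward-Euler step as a simple stand-in for the paper's collocation treatment of time. The implicit system matrix is strictly diagonally dominant, hence non-singular.

```python
import numpy as np

# Sketch: 2D heat equation, second-order finite differences in space.
# Backward Euler stands in for the collocation-in-time scheme.
n, h, dt = 10, 1.0 / 11, 1e-3
L = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        k = r * n + c
        L[k, k] = -4.0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n:     # zero Dirichlet boundary
                L[k, rr * n + cc] = 1.0
L /= h * h

M = np.eye(n * n) - dt * L                      # implicit system matrix
assert np.linalg.matrix_rank(M) == n * n        # non-singular

u = np.random.default_rng(1).random(n * n)
u_next = np.linalg.solve(M, u)                  # one implicit time step
assert np.all(np.isfinite(u_next))
```

Diagonal dominance of I - dt*L holds for every dt > 0, which is the discrete analogue of the unconditional stability reported in the abstract.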
Segmented strings coupled to a B-field
NASA Astrophysics Data System (ADS)
Vegh, David
2018-04-01
In this paper we study segmented strings in AdS3 coupled to a background two-form whose field strength is proportional to the volume form. By changing the coupling, the theory interpolates between the Nambu-Goto string and the SL(2, ℝ) Wess-Zumino-Witten model. In terms of the kink momentum vectors, the action is independent of the coupling and the classical theory reduces to a single discrete-time Toda-type theory. The WZW model is a singular point in coupling space where the map into Toda variables degenerates.
A physics based multiscale modeling of cavitating flows.
Ma, Jingsen; Hsiao, Chao-Tsung; Chahine, Georges L
2017-03-02
Numerical modeling of cavitating bubbly flows is challenging due to the wide range of characteristic lengths of the physics at play: from micrometers (e.g., bubble nuclei radius) to meters (e.g., propeller diameter or sheet cavity length). To address this, we present here a multiscale approach which integrates a Discrete Singularities Model (DSM) for dispersed microbubbles and a two-phase Navier Stokes solver for the bubbly medium, which includes a level set approach to describe large cavities or gaseous pockets. Inter-scale schemes are used to smoothly bridge the two transitioning subgrid DSM bubbles into larger discretized cavities. This approach is demonstrated on several problems including cavitation inception and vapor core formation in a vortex flow, sheet-to-cloud cavitation over a hydrofoil, cavitation behind a blunt body, and cavitation on a propeller. These examples highlight the capabilities of the developed multiscale model in simulating various forms of cavitation.
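At the microbubble end of the scale range, each dispersed nucleus is typically evolved with a Rayleigh-Plesset-type equation. The sketch below integrates a simplified form (no viscosity or surface tension) for a single nucleus subjected to a pressure drop; parameter values are illustrative, not from the paper.

```python
# Sketch: one subgrid bubble nucleus under a simplified Rayleigh-Plesset
# equation, rho*(R*Rddot + 1.5*Rdot^2) = p_gas - p_ambient.
rho, p_g0, R0, kappa = 1000.0, 100e3, 1e-4, 1.4   # water, 100 um nucleus

def step(R, Rdot, p_amb, dt):
    p_g = p_g0 * (R0 / R) ** (3 * kappa)          # polytropic gas pressure
    Rddot = ((p_g - p_amb) / rho - 1.5 * Rdot ** 2) / R
    return R + dt * Rdot, Rdot + dt * Rddot       # explicit Euler step

R, Rdot, dt = R0, 0.0, 1e-9
R_max = R0
for _ in range(20000):
    R, Rdot = step(R, Rdot, 50e3, dt)             # ambient drops to 50 kPa
    R_max = max(R_max, R)

assert R_max > R0                                  # the nucleus grows
```

When such a bubble outgrows the local grid size, the inter-scale schemes described in the abstract hand it over to the level-set representation of discretized cavities.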
Barriers to Achieving Textbook Multigrid Efficiency (TME) in CFD
NASA Technical Reports Server (NTRS)
Brandt, Achi
1998-01-01
As a guide to attaining this optimal performance for general CFD problems, the table below lists every foreseen kind of computational difficulty for achieving that goal, together with the possible ways for resolving that difficulty, their current state of development, and references. Included in the table are staggered and nonstaggered, conservative and nonconservative discretizations of viscous and inviscid, incompressible and compressible flows at various Mach numbers, as well as a simple (algebraic) turbulence model and comments on chemically reacting flows. The listing of associated computational barriers involves: non-alignment of streamlines or sonic characteristics with the grids; recirculating flows; stagnation points; discretization and relaxation on and near shocks and boundaries; far-field artificial boundary conditions; small-scale singularities (meaning important features, such as the complete airplane, which are not visible on some of the coarse grids); large grid aspect ratios; boundary layer resolution; and grid adaption.
A physics based multiscale modeling of cavitating flows
Ma, Jingsen; Hsiao, Chao-Tsung; Chahine, Georges L.
2018-01-01
Numerical modeling of cavitating bubbly flows is challenging due to the wide range of characteristic lengths of the physics at play: from micrometers (e.g., bubble nuclei radius) to meters (e.g., propeller diameter or sheet cavity length). To address this, we present here a multiscale approach which integrates a Discrete Singularities Model (DSM) for dispersed microbubbles and a two-phase Navier Stokes solver for the bubbly medium, which includes a level set approach to describe large cavities or gaseous pockets. Inter-scale schemes are used to smoothly bridge the two transitioning subgrid DSM bubbles into larger discretized cavities. This approach is demonstrated on several problems including cavitation inception and vapor core formation in a vortex flow, sheet-to-cloud cavitation over a hydrofoil, cavitation behind a blunt body, and cavitation on a propeller. These examples highlight the capabilities of the developed multiscale model in simulating various forms of cavitation. PMID:29720773
Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom; ...
2016-02-19
In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. In addition, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of a superconducting material can be understood in greater detail than previously possible.
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400^2) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24%, to 0.058 and 0.94 s for 64^3 and 128^3 water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle3 (Philips, Madison, WI) implementation. Pinnacle3 times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
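The convolution half of superposition/convolution can be illustrated in its simplest, invariant-kernel limit: dose approximated as the TERMA distribution convolved with a fixed energy-deposition kernel (true superposition varies and tilts the kernel with density and beam direction, as the abstract describes). All arrays below are synthetic toys.

```python
import numpy as np

# Sketch: invariant-kernel convolution limit of dose = TERMA (x) kernel.
n = 32
z = np.arange(n).reshape(-1, 1, 1)
terma = np.zeros((n, n, n))
terma[:, 12:20, 12:20] = np.exp(-0.05 * z)       # attenuated beam "field"

x = np.indices((n, n, n)) - n // 2
r2 = (x ** 2).sum(axis=0)
kernel = np.exp(-0.5 * r2)                        # toy point-spread kernel
kernel /= kernel.sum()                            # deposit all released energy
kernel = np.fft.ifftshift(kernel)                 # center kernel at origin

dose = np.fft.ifftn(np.fft.fftn(terma) * np.fft.fftn(kernel)).real

# A normalized kernel conserves total energy (exactly, for circular FFT conv).
assert np.isclose(dose.sum(), terma.sum())
```

The GPU implementations in the paper replace this FFT shortcut with explicit ray-traced superposition so that the kernel can vary per voxel.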
Fast automated analysis of strong gravitational lenses with convolutional neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-30
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
NASA Astrophysics Data System (ADS)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-01
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
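The profile the networks recover has a particularly simple spherical limit, the singular isothermal sphere (SIS), whose lens equation can be verified in a few lines. This toy (illustrative numbers, 1D on-axis geometry) shows why the Einstein radius is such a well-constrained parameter: it is half the image separation.

```python
# Sketch: lens equation beta = theta - theta_E * sign(theta) for a
# singular isothermal sphere, the spherical limit of the SIE profile.
theta_E = 1.2          # Einstein radius (arcsec), illustrative
beta = 0.3             # source position; |beta| < theta_E gives two images

img_plus, img_minus = beta + theta_E, beta - theta_E

def lens_eq(theta):
    return theta - theta_E * (1 if theta > 0 else -1)

assert abs(lens_eq(img_plus) - beta) < 1e-12     # both positions solve
assert abs(lens_eq(img_minus) - beta) < 1e-12    # the lens equation
# The Einstein radius is half the image separation:
assert abs((img_plus - img_minus) / 2 - theta_E) < 1e-12
```

The full SIE adds ellipticity and orientation angle, which are the extra parameters the networks regress from the image.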
Demonstration of Detection and Ranging Using Solvable Chaos
NASA Technical Reports Server (NTRS)
Corron, Ned J.; Stahl, Mark T.; Blakely, Jonathan N.
2013-01-01
Acoustic experiments demonstrate a novel approach to ranging and detection that exploits the properties of a solvable chaotic oscillator. This nonlinear oscillator includes an ordinary differential equation and a discrete switching condition. The chaotic waveform generated by this hybrid system is used as the transmitted waveform. The oscillator admits an exact analytic solution that can be written as the linear convolution of binary symbols and a single basis function. This linear representation enables coherent reception using a simple analog matched filter and without need for digital sampling or signal processing. An audio frequency implementation of the transmitter and receiver is described. Successful acoustic ranging measurements are presented to demonstrate the viability of the approach.
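The key structural fact, a waveform that is a linear convolution of binary symbols with a single basis function, makes matched-filter ranging easy to sketch numerically. The basis pulse below (a windowed decaying oscillation) is a stand-in for the oscillator's actual basis function, and the matched filter is realized as a cross-correlation.

```python
import numpy as np

# Sketch: ranging with a waveform u(t) = sum_m s_m * P(t - m*T).
rng = np.random.default_rng(2)
fs = 64                                    # samples per symbol period
symbols = rng.choice([-1.0, 1.0], size=64) # binary symbol sequence
t = np.arange(fs)
basis = np.exp(-t / fs) * np.sin(2 * np.pi * t / 16)   # stand-in basis P

drive = np.zeros(len(symbols) * fs)
drive[::fs] = symbols
tx = np.convolve(drive, basis)             # transmitted waveform

delay = 500                                # round-trip delay in samples
rx = np.zeros(len(tx) + 1000)
rx[delay:delay + len(tx)] = 0.5 * tx       # attenuated echo
rx += 0.1 * rng.standard_normal(len(rx))   # additive noise

corr = np.correlate(rx, tx, mode="valid")  # digital analogue of the
assert int(np.argmax(corr)) == delay       # analog matched filter
```

In the hardware system the same correlation is performed by a simple analog filter, with no sampling or digital signal processing, because the basis function admits an exact analytic matched filter.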
On the nonlinear development of the most unstable Goertler vortex mode
NASA Technical Reports Server (NTRS)
Denier, James P.; Hall, Philip
1991-01-01
The nonlinear development of the most unstable Gortler vortex mode in boundary layer flows over curved walls is investigated. The most unstable Gortler mode is confined to a viscous wall layer of thickness O(G^(-1/5)) and has spanwise wavelength O(G^(-1/5)); it is, of course, most relevant to flow situations where the Gortler number G is much greater than 1. The nonlinear equations covering the evolution of this mode over an O(G^(-3/5)) streamwise lengthscale are derived and are found to be of a fully nonparallel nature. The solution of these equations is achieved by making use of the numerical scheme used by Hall (1988) for the numerical solution of the nonlinear Gortler equations valid for O(1) Gortler numbers. Thus, the spanwise dependence of the flow is described by a Fourier expansion, whereas the streamwise and normal variations of the flow are dealt with by employing a suitable finite difference discretization of the governing equations. Our calculations demonstrate that, given a suitable initial disturbance, after a brief interval of decay, the energy in all the higher harmonics grows until a singularity is encountered at some downstream position. The structure of the flowfield as this singularity is approached suggests that the singularity is responsible for the vortices, which are initially confined to the thin viscous wall layer, moving away from the wall and into the core of the boundary layer.
NASA Astrophysics Data System (ADS)
Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2008-02-01
The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high and low frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.
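The two analysis tools combine naturally: a one-level discrete wavelet transform separates each spectrum into low- and high-frequency parts, and an SVD across spectra probes shared structure. The sketch below uses a Haar wavelet and synthetic spectra (a broad band plus noise), not tissue fluorescence data.

```python
import numpy as np

# Sketch: Haar wavelet split of a spectrum, then SVD across many spectra.
rng = np.random.default_rng(3)
wav = np.linspace(500, 700, 256)                 # wavelength grid (nm)
band = np.exp(-((wav - 580) ** 2) / 800.0)       # broad emission band
spectra = np.array([band + 0.05 * rng.standard_normal(256)
                    for _ in range(20)])         # 20 noisy spectra

def haar(x):
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-frequency content
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-frequency fluctuations
    return approx, detail

approx, detail = haar(spectra[0])
assert approx.var() > detail.var()   # smooth structure lives at low frequency

# One dominant singular value: the spectra share a single smooth component.
s = np.linalg.svd(spectra, compute_uv=False)
assert s[0] > 5 * s[1]
```

In the paper, the statistics of the detail coefficients (Gaussian vs non-Gaussian) and of the small singular values are what discriminate cancerous from normal tissue.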
Dynamic analysis of nonlinear rotor-housing systems
NASA Technical Reports Server (NTRS)
Noah, Sherif T.
1988-01-01
Nonlinear analysis methods are developed which will enable the reliable prediction of the dynamic behavior of the space shuttle main engine (SSME) turbopumps in the presence of bearing clearances and other local nonlinearities. A computationally efficient convolution method, based on discretized Duhamel and transition matrix integral formulations, is developed for the transient analysis. In the formulation, the coupling forces due to the nonlinearities are treated as external forces acting on the coupled subsystems. Iteration is utilized to determine their magnitudes at each time increment. The method is applied to a nonlinear generic model of the high pressure oxygen turbopump (HPOTP). As compared to fourth-order Runge-Kutta numerical integration methods, the convolution approach proved to be more accurate and more highly efficient. For determining the nonlinear, steady-state periodic responses, an incremental harmonic balance method was also developed. The method was successfully used to determine dominantly harmonic and subharmonic responses of the HPOTP generic model with bearing clearances. A reduction method similar to the impedance formulation utilized with linear systems is used to reduce the housing-rotor models to their coordinates at the bearing clearances. Recommendations are included for further development of the method, for extending the analysis to aperiodic and chaotic regimes and for conducting critical parametric studies of the nonlinear response of the current SSME turbopumps.
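The linear backbone of such a convolution method is the discretized Duhamel integral: the response is the convolution of the forcing with the system's impulse response. A single-degree-of-freedom sketch, checked against the exact step response (parameters illustrative, not turbopump data):

```python
import numpy as np

# Sketch: transient response by discretized Duhamel (convolution) integral
# for m x'' + c x' + k x = F(t), validated against the exact step response.
m, c, k, F0 = 1.0, 0.4, 100.0, 1.0
wn = np.sqrt(k / m)
zeta = c / (2 * m * wn)
wd = wn * np.sqrt(1 - zeta ** 2)

dt = 2e-4
t = np.arange(0, 1.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)  # impulse response
F = np.full_like(t, F0)                                 # step force
x = np.convolve(F, h)[: len(t)] * dt                    # Duhamel integral

x_exact = (F0 / k) * (1 - np.exp(-zeta * wn * t)
                      * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t)))
assert np.max(np.abs(x - x_exact)) < 1e-3
```

In the paper's scheme the nonlinear coupling forces at the bearing clearances are appended to F and iterated to convergence within each time increment.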
Some integrable maps and their Hirota bilinear forms
NASA Astrophysics Data System (ADS)
Hone, A. N. W.; Kouloukas, T. E.; Quispel, G. R. W.
2018-01-01
We introduce a two-parameter family of birational maps, which reduces to a family previously found by Demskoi, Tran, van der Kamp and Quispel (DTKQ) when one of the parameters is set to zero. The study of the singularity confinement pattern for these maps leads to the introduction of a tau function satisfying a homogeneous recurrence which has the Laurent property, and the tropical (or ultradiscrete) analogue of this homogeneous recurrence confirms the quadratic degree growth found empirically by Demskoi et al. We prove that the tau function also satisfies two different bilinear equations, each of which is a reduction of the Hirota-Miwa equation (also known as the discrete KP equation, or the octahedron recurrence). Furthermore, these bilinear equations are related to reductions of particular two-dimensional integrable lattice equations, of discrete KdV or discrete Toda type. These connections, as well as the cluster algebra structure of the bilinear equations, allow a direct construction of Poisson brackets, Lax pairs and first integrals for the birational maps. As a consequence of the latter results, we show how each member of the family can be lifted to a system that is integrable in the Liouville sense, clarifying observations made previously in the original DTKQ case.
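The Laurent property mentioned above has a striking computable signature: iterating a bilinear recurrence from unit initial data produces integers, even though every step divides. The classic Somos-4 recurrence, a Hirota-type bilinear equation used here purely as an analogue (it is not the DTKQ tau recurrence), illustrates this:

```python
from fractions import Fraction

# Sketch: Laurent phenomenon for the bilinear Somos-4 recurrence
#   tau_{n+4} * tau_n = tau_{n+3} * tau_{n+1} + tau_{n+2}^2.
tau = [Fraction(1)] * 4
for n in range(40):
    tau.append((tau[-1] * tau[-3] + tau[-2] ** 2) / tau[-4])

# Laurentness implies integrality from unit initial data,
# despite the division by tau_n at every step.
assert all(t.denominator == 1 for t in tau)
assert [int(t) for t in tau[:9]] == [1, 1, 1, 1, 2, 3, 7, 23, 59]
```

The number of digits of tau_n grows roughly quadratically in n, the same quadratic degree growth that the tropical analysis confirms for the DTKQ tau function.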
NASA Astrophysics Data System (ADS)
Abro, Kashif Ali; Memon, Anwar Ahmed; Uqaili, Muhammad Aslam
2018-03-01
This article presents a comparative study of RL and RC electrical circuits using the newly introduced Atangana-Baleanu and Caputo-Fabrizio fractional derivatives. The governing ordinary differential equations of RL and RC electrical circuits have been fractionalized in terms of fractional operators in the range 0 ≤ ξ ≤ 1 and 0 ≤ η ≤ 1. The fractional differential equations for the RL and RC circuits are solved analytically using the Laplace transform and its inverse. General solutions have been investigated for periodic and exponential sources by implementing the Atangana-Baleanu and Caputo-Fabrizio fractional operators separately. The investigated solutions have been expressed in terms of simple elementary functions with convolution product. On the basis of the new fractional derivatives with and without singular kernel, the voltage and current exhibit interesting behavior, with several similarities and differences for the periodic and exponential sources.
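In the classical limit ξ → 1, the convolution-product structure of these solutions reduces to a familiar form: the RL current is the source convolved with an exponential kernel, i(t) = (1/L) ∫ exp(-R(t-s)/L) v(s) ds. The fractional operators replace this kernel by a Mittag-Leffler (Atangana-Baleanu) or exponential (Caputo-Fabrizio) kernel. A numerical check of the classical limit:

```python
import numpy as np

# Sketch: classical RL circuit as a convolution product,
# checked against the textbook step response i(t) = (1 - e^{-Rt/L}) / R.
R, L = 2.0, 0.5
dt = 2e-4
t = np.arange(0, 1.0, dt)
v = np.ones_like(t)                              # unit-step voltage source
kern = np.exp(-R * t / L) / L                    # exponential circuit kernel
i = np.convolve(v, kern)[: len(t)] * dt          # convolution-product solution

i_exact = (1 - np.exp(-R * t / L)) / R
assert np.max(np.abs(i - i_exact)) < 1e-3
```

The same discretized convolution applies with the fractional kernels once they are tabulated on the time grid.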
Quantization of Poisson Manifolds from the Integrability of the Modular Function
NASA Astrophysics Data System (ADS)
Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.
2014-10-01
We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU(n+1). We show that a bihamiltonian system on defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the needed properties for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.
Quantum walks with an anisotropic coin I: spectral theory
NASA Astrophysics Data System (ADS)
Richard, S.; Suzuki, A.; Tiedra de Aldecoa, R.
2018-02-01
We perform the spectral analysis of the evolution operator U of quantum walks with an anisotropic coin, which include one-defect models, two-phase quantum walks, and topological phase quantum walks as special cases. In particular, we determine the essential spectrum of U, we show the existence of locally U-smooth operators, we prove the discreteness of the eigenvalues of U outside the thresholds, and we prove the absence of singular continuous spectrum for U. Our analysis is based on new commutator methods for unitary operators in a two-Hilbert spaces setting, which are of independent interest.
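A finite stand-in for such walks makes the spectral statements concrete: on a ring, the evolution operator U = S·C (shift times site-dependent coin) is unitary, so its spectrum lies on the unit circle. The "two-phase" coin below, one angle on each half of the ring, is an illustrative anisotropic choice, not the paper's model.

```python
import numpy as np

# Sketch: coined quantum walk on a ring of N sites, site-dependent coin.
N = 16

def coin(theta):
    # Real unitary (and self-inverse) coin acting on the 2D internal space.
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Block-diagonal coin: one phase on the left half, another on the right.
C = np.zeros((2 * N, 2 * N))
for x in range(N):
    th = 0.3 if x < N // 2 else 1.1
    C[2 * x:2 * x + 2, 2 * x:2 * x + 2] = coin(th)

# Shift: "left" component moves to x-1, "right" component to x+1.
S = np.zeros((2 * N, 2 * N))
for x in range(N):
    S[2 * ((x - 1) % N), 2 * x] = 1.0           # left mover
    S[2 * ((x + 1) % N) + 1, 2 * x + 1] = 1.0   # right mover

U = S @ C
assert np.allclose(U @ U.conj().T, np.eye(2 * N))      # unitarity
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)  # unit-circle spectrum
```

The paper's commutator methods analyze the infinite-lattice analogue, where the interesting questions are the location of essential spectrum, eigenvalues at thresholds, and the absence of singular continuous spectrum.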
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
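The classical ingredient of the CSS construction, a binary convolutional code, is just polynomial convolution over GF(2). A rate-1/2 sketch with the standard constraint-length-3 generators (7, 5) in octal, chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Sketch: rate-1/2 classical binary convolutional encoder, generators (7,5).
g1, g2 = np.array([1, 1, 1]), np.array([1, 0, 1])

def encode(bits):
    bits = np.asarray(bits)
    out1 = np.convolve(bits, g1) % 2      # polynomial product over GF(2)
    out2 = np.convolve(bits, g2) % 2
    return np.stack([out1, out2], axis=1).ravel()   # interleave both streams

cw = encode([1, 0, 1, 1])
assert len(cw) == 12                       # 2 * (4 + 3 - 1) output bits

# Linearity over GF(2): encode(a) + encode(b) = encode(a XOR b) (mod 2).
a, b = np.array([1, 0, 1, 1]), np.array([0, 1, 1, 0])
assert np.array_equal((encode(a) + encode(b)) % 2, encode(a ^ b))
```

In the entanglement-assisted construction, two such classical codes supply the stabilizer structure, and their rates and distances carry over to the resulting quantum convolutional code.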
A spectral approach for the stability analysis of turbulent open-channel flows over granular beds
NASA Astrophysics Data System (ADS)
Camporeale, C.; Canuto, C.; Ridolfi, L.
2012-01-01
A novel Orr-Sommerfeld-like equation for gravity-driven turbulent open-channel flows over a granular erodible bed is here derived, and the linear stability analysis is developed. The whole spectrum of eigenvalues and eigenvectors of the complete generalized eigenvalue problem is computed and analyzed. The fourth-order eigenvalue problem presents singular non-polynomial coefficients with non-homogenous Robin-type boundary conditions that involve first and second derivatives. Furthermore, the Exner condition is imposed at an internal point. We propose a numerical discretization of spectral type based on a single-domain Galerkin scheme. In order to manage the presence of singular coefficients, some properties of Jacobi polynomials have been carefully blended with numerical integration of Gauss-Legendre type. The results show a positive agreement with the classical experimental data and allow one to relate the different types of instability to such parameters as the Froude number, wavenumber, and the roughness scale. The eigenfunctions allow two types of boundary layers to be distinguished, scaling, respectively, with the roughness height and the saltation layer for the bedload sediment transport.
Agreement With Conjoined NPs Reflects Language Experience.
Lorimor, Heidi; Adams, Nora C; Middleton, Erica L
2018-01-01
An important question within psycholinguistic research is whether grammatical features, such as number values on nouns, are probabilistic or discrete. Similarly, researchers have debated whether grammatical specifications are only set for individual lexical items, or whether certain types of noun phrases (NPs) also obtain number valuations at the phrasal level. Through a corpus analysis and an oral production task, we show that conjoined NPs can take both singular and plural verb agreement and that notional number (i.e., the numerosity of the referent of the subject noun phrase) plays an important role in agreement with conjoined NPs. In two written production tasks, we show that participants who are exposed to plural (versus singular or unmarked) agreement with conjoined NPs in a biasing story are more likely to produce plural agreement with conjoined NPs on a subsequent production task. This suggests that, in addition to their sensitivity to notional information, conjoined NPs have probabilistic grammatical specifications that reflect their distributional properties in language. These results provide important evidence that grammatical number reflects language experience, and that this language experience impacts agreement at the phrasal level, and not just the lexical level.
NASA Astrophysics Data System (ADS)
Pipkins, Daniel Scott
Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible, lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly; hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element-level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and a von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem, which differ substantially.
NASA Astrophysics Data System (ADS)
Levi, Decio; Olver, Peter; Thomova, Zora; Winternitz, Pavel
2009-11-01
The concept of integrability was introduced in classical mechanics in the 19th century for finite dimensional continuous Hamiltonian systems. It was extended to certain classes of nonlinear differential equations in the second half of the 20th century with the discovery of the inverse scattering transform and the birth of soliton theory. Also at the end of the 19th century Lie group theory was invented as a powerful tool for obtaining exact analytical solutions of large classes of differential equations. Together, Lie group theory and integrability theory in its most general sense provide the main tools for solving nonlinear differential equations. Like differential equations, difference equations play an important role in physics and other sciences. They occur very naturally in the description of phenomena that are genuinely discrete. Indeed, they may actually be more fundamental than differential equations if space-time is actually discrete at very short distances. On the other hand, even when treating continuous phenomena described by differential equations it is very often necessary to resort to numerical methods. This involves a discretization of the differential equation, i.e. a replacement of the differential equation by a difference one. Given the well developed and understood techniques of symmetry and integrability for differential equations a natural question to ask is whether it is possible to develop similar techniques for difference equations. The aim is, on one hand, to obtain powerful methods for solving `integrable' difference equations and to establish practical integrability criteria, telling us when the methods are applicable. On the other hand, Lie group methods can be adapted to solve difference equations analytically. Finally, integrability and symmetry methods can be combined with numerical methods to obtain improved numerical solutions of differential equations. 
The origin of the SIDE meetings goes back to the early 1990s and the first meeting with the name `Symmetries and Integrability of Discrete Equations (SIDE)' was held in Estérel, Québec, Canada. This was organized by D Levi, P Winternitz and L Vinet. After the success of the first meeting the scientific community decided to hold biennial SIDE meetings. They were held in 1996 at the University of Kent (UK), 1998 in Sabaudia (Italy), 2000 at the University of Tokyo (Japan), 2002 in Giens (France), 2004 in Helsinki (Finland) and in 2006 at the University of Melbourne (Australia). In 2008 the SIDE 8 meeting was again organized near Montreal, in Ste-Adèle, Québec, Canada. The SIDE 8 International Advisory Committee (also the SIDE steering committee) consisted of Frank Nijhoff, Alexander Bobenko, Basil Grammaticos, Jarmo Hietarinta, Nalini Joshi, Decio Levi, Vassilis Papageorgiou, Junkichi Satsuma, Yuri Suris, Claude Viallet and Pavel Winternitz. The local organizing committee consisted of Pavel Winternitz, John Harnad, Véronique Hussin, Decio Levi, Peter Olver and Luc Vinet. Financial support came from the Centre de Recherches Mathématiques in Montreal and the National Science Foundation (through the University of Minnesota). Proceedings of the first three SIDE meetings were published in the LMS Lecture Note series. Since 2000 the emphasis has been on publishing selected refereed articles in response to a general call for papers issued after the conference. This allows for a wider author base, since the call for papers is not restricted to conference participants. The SIDE topics thus are represented in special issues of Journal of Physics A: Mathematical and General 34 (48) and Journal of Physics A: Mathematical and Theoretical 40 (42) (SIDE 4 and SIDE 7, respectively), and Journal of Nonlinear Mathematical Physics 10 (Suppl. 2) and 12 (Suppl. 2) (SIDE 5 and SIDE 6, respectively).
The SIDE 8 meeting was organized around several topics and the contributions to this special issue reflect the diversity presented during the meeting. The papers presented at the SIDE 8 meeting were organized into the following special sessions: geometry of discrete and continuous Painlevé equations; continuous symmetries of discrete equations—theory and computational applications; algebraic aspects of discrete equations; singularity confinement, algebraic entropy and Nevanlinna theory; discrete differential geometry; discrete integrable systems and isomonodromy transformations; special functions as solutions of difference and q-difference equations. This special issue of the journal is organized along similar lines. The first three articles are topical review articles appearing in alphabetical order (by first author). The article by Doliwa and Nieszporski describes the Darboux transformations in a discrete setting, namely for the discrete second order linear problem. The article by Grammaticos, Halburd, Ramani and Viallet concentrates on the integrability of the discrete systems, in particular they describe integrability tests for difference equations such as singularity confinement, algebraic entropy (growth and complexity), and analytic and arithmetic approaches. The topical review by Konopelchenko explores the relationship between the discrete integrable systems and deformations of associative algebras. All other articles are presented in alphabetical order (by first author). The contributions were solicited from all participants as well as from the general scientific community. The contributions published in this special issue can be loosely grouped into several overlapping topics, namely: •Geometry of discrete and continuous Painlevé equations (articles by Spicer and Nijhoff and by Lobb and Nijhoff). 
•Continuous symmetries of discrete equations—theory and applications (articles by Dorodnitsyn and Kozlov; Levi, Petrera and Scimiterna; Scimiterna; Ste-Marie and Tremblay; Levi and Yamilov; Rebelo and Winternitz). •Yang-Baxter maps (article by Xenitidis and Papageorgiou). •Algebraic aspects of discrete equations (articles by Doliwa and Nieszporski; Konopelchenko; Tsarev and Wolf). •Singularity confinement, algebraic entropy and Nevanlinna theory (articles by Grammaticos, Halburd, Ramani and Viallet; Grammaticos, Ramani and Tamizhmani). •Discrete integrable systems and isomonodromy transformations (article by Dzhamay). •Special functions as solutions of difference and q-difference equations (articles by Atakishiyeva, Atakishiyev and Koornwinder; Bertola, Gekhtman and Szmigielski; Vinet and Zhedanov). •Other topics (articles by Atkinson; Grünbaum; Nagai, Kametaka and Watanabe; Nagiyev, Guliyeva and Jafarov; Sahadevan and Uma Maheswari; Svinin; Tian and Hu; Yao, Liu and Zeng). This issue is the result of the collaboration of many individuals. We would like to thank the authors who contributed and everyone else involved in the preparation of this special issue.
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from Macro-Block losses due to either packet dropping or fading-motivated bit errors. Thus, robust 3D-MVV transmission over wireless channels has recently become a considerable research issue due to restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, highly-compressed 3D-MVV data become more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It is used to reduce the channel effects on the transmitted bit streams and it also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD watermarked 3D-MVV frames were executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
Discrete homotopy analysis for optimal trading execution with nonlinear transient market impact
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Gatheral, Jim; Lillo, Fabrizio
2016-10-01
Optimal execution in financial markets is the problem of how to trade a large quantity of shares incrementally in time in order to minimize the expected cost. In this paper, we study the problem of optimal execution in the presence of nonlinear transient market impact. Mathematically, this problem is equivalent to solving a strongly nonlinear integral equation, which in our model is a weakly singular Urysohn equation of the first kind. We propose an approach based on the Homotopy Analysis Method (HAM), whereby a well-behaved initial trading strategy is continuously deformed to lower the expected execution cost. Specifically, we propose a discrete version of the HAM, i.e., the DHAM approach, so that the method can be used when the integrals to compute have no closed-form solution. We find that the optimal solution is front loaded for concave instantaneous impact even when the investor is risk neutral. More importantly, we find that the expected cost of the DHAM strategy is significantly smaller than the cost of conventional strategies.
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module quickly produces a preliminary landmark prediction that is sufficiently accurate for initialization, taking a low-resolution version of the detected face holistically as input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
3D ductile crack propagation within a polycrystalline microstructure using XFEM
NASA Astrophysics Data System (ADS)
Beese, Steffen; Loehnert, Stefan; Wriggers, Peter
2018-02-01
In this contribution we present a gradient enhanced damage based method to simulate discrete crack propagation in 3D polycrystalline microstructures. Discrete cracks are represented using the eXtended finite element method. The crack propagation criterion and the crack propagation direction for each point along the crack front line is based on the gradient enhanced damage variable. This approach requires the solution of a coupled problem for the balance of momentum and the additional global equation for the gradient enhanced damage field. To capture the discontinuity of the displacements as well as the gradient enhanced damage along the discrete crack, both fields are enriched using the XFEM in combination with level sets. Knowing the crack front velocity, level set methods are used to compute the updated crack geometry after each crack propagation step. The applied material model is a crystal plasticity model often used for polycrystalline microstructures of metals in combination with the gradient enhanced damage model. Due to the inelastic material behaviour after each discrete crack propagation step a projection of the internal variables from the old to the new crack configuration is required. Since for arbitrary crack geometries ill-conditioning of the equation system may occur due to (near) linear dependencies between standard and enriched degrees of freedom, an XFEM stabilisation technique based on a singular value decomposition of the element stiffness matrix is proposed. The performance of the presented methodology to capture crack propagation in polycrystalline microstructures is demonstrated with a number of numerical examples.
An accurate front capturing scheme for tumor growth models with a free boundary limit
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan
2018-07-01
We consider a class of tumor growth models under the combined effects of density-dependent pressure and cell multiplication, with a free boundary model as its singular limit when the pressure-density relationship becomes highly nonlinear. In particular, the constitutive law connecting pressure p and density ρ is p(ρ) = (m/(m-1)) ρ^(m-1), and when m ≫ 1, the cell density ρ may evolve its support according to a pressure-driven geometric motion with sharp interface along its boundary. The nonlinearity and degeneracy in the diffusion bring great challenges in numerical simulations. Prior to the present paper, there was a lack of standard mechanisms to numerically capture the front propagation speed as m ≫ 1. In this paper, we develop a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation even when the nonlinearity is extremely strong. We show that the semi-discrete scheme naturally connects to the free boundary limit equation as m → ∞. With proper spatial discretization, the fully discrete scheme has improved stability, preserves positivity, and can be implemented without nonlinear solvers. Finally, extensive numerical examples in both one and two dimensions are provided to verify the claimed properties in various applications.
FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)
NASA Astrophysics Data System (ADS)
Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.
2011-04-01
A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU-central processing unit speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements on an inexpensive desktop computer.
Boundary particle method for Laplace transformed time fractional diffusion equations
NASA Astrophysics Data System (ADS)
Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian
2013-02-01
This paper develops a novel boundary meshless approach, the Laplace transformed boundary particle method (LTBPM), for numerical modeling of time fractional diffusion equations. It implements the Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into a higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of time fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces the Laplace transform and the Stehfest NILT algorithm to deal with the time fractional derivative term, which evades the costly convolution integral calculation in the time fractional derivative approximation and avoids the effect of the time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time fractional diffusion equations.
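The Stehfest inversion step named in the abstract is standard and compact enough to sketch. The pure-Python fragment below is not taken from the paper; it is a minimal sketch of the Gaver-Stehfest algorithm (the choice N = 12 is ours), which recovers f(t) from real-axis samples of its Laplace transform F(s) and is checked against the known pair F(s) = 1/(s+1) ↔ f(t) = e^(-t):

```python
from math import factorial, log, exp

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j)
                     * factorial(j - 1) * factorial(k - j)
                     * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), sampled on the real axis."""
    ln2 = log(2.0)
    V = stehfest_weights(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# self-check against a known transform pair: F(s) = 1/(s+1)  <=>  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The method works well for smooth, non-oscillatory f(t), which matches the diffusion setting of the paper; the weights grow rapidly with N, so N around 10-14 is the usual double-precision sweet spot.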
Improved Discrete Approximation of Laplacian of Gaussian
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2004-01-01
An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of computation that must be performed in subsequent correlation processing (e.g., correlations between images in a stereoscopic pair for determining distances or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
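To make the LOG operation concrete, the sketch below (not the improved method of the report; sigma, radius, and the zero-mean normalisation are our illustrative choices) builds a sampled LoG kernel and applies it by direct convolution. A flat image yields a near-zero response, the property that lets LOG emphasize edges and textures while ignoring uniform regions:

```python
import numpy as np

def log_kernel(sigma, radius):
    """Sampled Laplacian-of-Gaussian kernel (up to a constant factor),
    shifted to zero mean so flat regions give zero response."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def convolve2d_valid(img, k):
    """Direct 'valid' 2-D convolution (the kernel is symmetric, so no flip is needed)."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# a flat image gives an essentially zero LoG response
flat = np.ones((9, 9))
resp = convolve2d_valid(flat, log_kernel(sigma=1.0, radius=3))
```

In practice the Gaussian smoothing and Laplacian are fused into this single kernel precisely because convolution is associative, which is also what makes hardware-friendly approximations of the combined operator attractive.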
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
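The abstract stays at survey level; for concreteness, the fundamentals it refers to can be illustrated with a minimal shift-register convolutional encoder. The rate-1/2, memory-2 code with octal generators (7, 5) used below is the textbook example, chosen here for illustration rather than taken from the report:

```python
def conv_encode(bits, gens=(0o7, 0o5), K=3):
    """Rate-1/len(gens) convolutional encoder with constraint length K.
    Each generator selects shift-register taps; output bits are tap parities."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):          # flush with zeros to end in state 0
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count('1') % 2)
    return out

# 4 information bits -> 2 * (4 + 2) = 12 coded bits after flushing
coded = conv_encode([1, 0, 1, 1])
```

Each information bit produces two channel bits, giving the redundancy that a Viterbi or sequential decoder later exploits for error protection.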
Smith, D J; Gaffney, E A; Blake, J R
2007-07-01
We discuss in detail techniques for modelling flows due to finite and infinite arrays of beating cilia. An efficient technique, based on concepts from previous 'singularity models' is described, that is accurate in both near and far-fields. Cilia are modelled as curved slender ellipsoidal bodies by distributing Stokeslet and potential source dipole singularities along their centrelines, leading to an integral equation that can be solved using a simple and efficient discretisation. The computed velocity on the cilium surface is found to compare favourably with the boundary condition. We then present results for two topics of current interest in biology. 1) We present the first theoretical results showing the mechanism by which rotating embryonic nodal cilia produce a leftward flow by a 'posterior tilt,' and track particle motion in an array of three simulated nodal cilia. We find that, contrary to recent suggestions, there is no continuous layer of negative fluid transport close to the ciliated boundary. The mean leftward particle transport is found to be just over 1 μm/s, within experimentally measured ranges. We also discuss the accuracy of models that represent the action of cilia by steady rotlet arrays, in particular, confirming the importance of image systems in the boundary in establishing the far-field fluid transport. Future modelling may lead to understanding of the mechanisms by which morphogen gradients or mechanosensing cilia convert a directional flow to asymmetric gene expression. 2) We develop a more complex and detailed model of flow patterns in the periciliary layer of the airway surface liquid. Our results confirm that shear flow of the mucous layer drives a significant volume of periciliary liquid in the direction of mucus transport even during the recovery stroke of the cilia. 
Finally, we discuss the advantages and disadvantages of the singularity technique and outline future theoretical and experimental developments required to apply this technique to various other biological problems, particularly in the reproductive system.
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
Andrews, D.J.
1985-01-01
A numerical boundary integral method, relating slip and traction on a plane in an elastic medium by convolution with a discretized Green function, can be linked to a slip-dependent friction law on the fault plane. Such a method is developed here in two-dimensional plane-strain geometry. Spontaneous plane-strain shear ruptures can make a transition from sub-Rayleigh to near-P propagation velocity. Results from the boundary integral method agree with earlier results from a finite difference method on the location of this transition in parameter space. The methods differ in their prediction of rupture velocity following the transition. The trailing edge of the cohesive zone propagates at the P-wave velocity after the transition in the boundary integral calculations.
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: the useful signal is retained and noise is filtered out by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
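The Toeplitz-plus-SVD construction described above can be sketched in a few lines. The random ±1 `code` below is a stand-in for a real GNSS spreading code and the matrix sizes are illustrative, not the paper's; the sketch only demonstrates that a shifted replica of the code is fully captured by the left-singular-vector basis of the Toeplitz matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-in for a GNSS spreading-code replica
code = rng.choice([-1.0, 1.0], size=64)

# rectangular Toeplitz-style matrix: cyclically shifted copies of the replica
L, shifts = 64, 32
TZ = np.zeros((L, shifts))
for j in range(shifts):
    TZ[:, j] = np.roll(code, j)

# left singular vectors: an orthonormal basis adapted to the signal family
U, S, Vt = np.linalg.svd(TZ, full_matrices=False)

# a received signal that is a shifted replica lies in the span of U's columns,
# so projecting onto them loses none of its energy
received = np.roll(code, 5)
coeffs = U.T @ received
energy_captured = np.sum(coeffs ** 2) / np.sum(received ** 2)
```

Noise, by contrast, spreads its energy across all directions, which is why projecting onto the leading modes acts as the acquisition-enhancing filter the abstract describes.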
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
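The complexity measure used in the article, trellis edges per encoded bit, is easy to evaluate for the conventional (not necessarily minimal) trellis. The sketch below uses the standard state-count assumptions for a rate-k/n encoder with total memory m; it is a back-of-envelope illustration, not the article's minimal-trellis construction:

```python
def conventional_trellis_edges_per_bit(k, n, m):
    """Edges per encoded bit in the conventional trellis of a rate-k/n
    convolutional encoder with total memory m: 2**m states, each with
    2**k outgoing branches, and n output bits per trellis section."""
    states = 2 ** m
    edges_per_section = states * 2 ** k
    return edges_per_section / n

# classic rate-1/2, memory-2 (constraint length K = 3) code:
# 4 states x 2 branches / 2 bits = 4 edges per encoded bit
e = conventional_trellis_edges_per_bit(k=1, n=2, m=2)
```

The article's point is that for some codes (notably punctured ones) a cleverer trellis achieves fewer edges per bit than this conventional count, which is what motivates a minimal-trellis theory.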
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw
Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncated error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.
On irregular singularity wave functions and superconformal indices
NASA Astrophysics Data System (ADS)
Buican, Matthew; Nishinaka, Takahiro
2017-09-01
We generalize, in a manifestly Weyl-invariant way, our previous expressions for irregular singularity wave functions in two-dimensional SU(2) q-deformed Yang-Mills theory to SU(N). As an application, we give closed-form expressions for the Schur indices of all (A_{N-1}, A_{N(n-1)-1}) Argyres-Douglas (AD) superconformal field theories (SCFTs), thus completing the computation of these quantities for the (A_N, A_M) SCFTs. With minimal effort, our wave functions also give new Schur indices of various infinite sets of "Type IV" AD theories. We explore the discrete symmetries of these indices and also show how highly intricate renormalization group (RG) flows from isolated theories and conformal manifolds in the ultraviolet to isolated theories and (products of) conformal manifolds in the infrared are encoded in these indices. We compare our flows with dimensionally reduced flows via a simple "monopole vev RG" formalism. Finally, since our expressions are given in terms of concise Lie algebra data, we speculate on extensions of our results that might be useful for probing the existence of hypothetical SCFTs based on other Lie algebras. We conclude with a discussion of some open problems.
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.
2003-01-01
Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown that there is a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
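A minimal sketch of the SVD idea (NumPy; random filters stand in for measured HRTFs): factoring the bank of per-source FIR filters lets a small number of shared filters, applied to cheap weighted mixes, replace one filter per source:

```python
import numpy as np

def mix_full(H, X):
    """Reference path: one FIR filter per source (N convolutions).
    H is (taps, sources); X is (sources, samples)."""
    return sum(np.convolve(H[:, n], X[n]) for n in range(H.shape[1]))

def mix_svd(H, X, rank):
    """SVD reduction: form `rank` weighted mixes of the sources,
    then run only `rank` shared convolutions."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    mixes = Vt[:rank] @ X          # cheap per-sample weighted sums
    return sum(s[k] * np.convolve(U[:, k], mixes[k]) for k in range(rank))
```

When the filter bank is numerically low rank, the two outputs agree while the number of real-time convolutions drops from the source count to the rank.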
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating-point operations. If a given algorithm implemented integer convolution instead of floating-point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared span 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time for which convolution is responsible in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
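The integer-versus-float distinction can be made concrete with a direct 2-D convolution written in pure integer arithmetic (a sketch in Python for readability; the RED kernels themselves are not reproduced here):

```python
import numpy as np

def conv2d_int(image, kernel):
    """Direct 'valid' 2-D convolution using only integer multiplies
    and adds; on a fixed-point-capable processor this avoids the
    energy cost of floating-point operations discussed above."""
    kh, kw = kernel.shape
    k = kernel[::-1, ::-1]  # flip the kernel for true convolution
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    for r in range(oh):
        for c in range(ow):
            out[r, c] = int(np.sum(image[r:r + kh, c:c + kw] * k))
    return out
```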
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied, which improves image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also performs convolution operations, making it very suitable for processing images. Using a deep convolutional neural network is better for image retrieval than directly extracting visual image features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and helps improve the classification accuracy slightly. In addition, recent deep learning techniques such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.
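The multi-scale idea can be sketched (NumPy; the box kernels and widths 3/5/7 are illustrative stand-ins, not the trained kernels) by filtering one pixel's spectral vector at several kernel widths and stacking the responses:

```python
import numpy as np

def multi_scale_features(spectrum, scales=(3, 5, 7)):
    """Convolve one pixel's spectral vector with kernels of several
    widths and stack the responses, mimicking a multi-scale
    convolution layer with three kernel sizes."""
    feats = []
    for k in scales:
        kernel = np.ones(k) / k  # illustrative box kernel
        feats.append(np.convolve(spectrum, kernel, mode='same'))
    return np.stack(feats)
```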
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
Arrieta-Camacho, Juan José; Biegler, Lorenz T
2005-12-01
Real time optimal guidance is considered for a class of low thrust spacecraft. In particular, nonlinear model predictive control (NMPC) is utilized for computing the optimal control actions required to transfer a spacecraft from a low Earth orbit to a mission orbit. The NMPC methodology presented is able to cope with unmodeled disturbances. The dynamics of the transfer are modeled using a set of modified equinoctial elements because they do not exhibit singularities for zero inclination and zero eccentricity. The idea behind NMPC is the repeated solution of optimal control problems; at each time step, a new control action is computed. The optimal control problem is solved using a direct method, fully discretizing the equations of motion. The large scale nonlinear program resulting from the discretization procedure is solved using IPOPT, a primal-dual interior point algorithm. Stability and robustness characteristics of the NMPC algorithm are reviewed. A numerical example is presented that encourages further development of the proposed methodology: the transfer from low Earth orbit to a Molniya orbit.
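The receding-horizon pattern itself is compact; a toy sketch follows (scalar dynamics x⁺ = x + u with a terminal-state cost, chosen so the inner solve has a closed form; nothing here models the spacecraft dynamics or the IPOPT solver):

```python
import numpy as np

def mpc_step(x, horizon):
    """One NMPC iteration: solve min (x + sum(u))**2 + ||u||**2 over
    the horizon via its normal equations, apply only the first move."""
    ones = np.ones(horizon)
    M = np.outer(ones, ones) + np.eye(horizon)
    u = np.linalg.solve(M, -x * ones)
    return x + u[0]

def receding_horizon(x0, horizon=5, steps=20):
    """Repeatedly re-solve and apply the first control, as in NMPC."""
    x = x0
    for _ in range(steps):
        x = mpc_step(x, horizon)
    return x
```

For this cost the inner solution is u_i = −x/(horizon + 1), so each step contracts the state by a factor horizon/(horizon + 1), and the loop steers the state toward the origin.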
The cosmological constant as an eigenvalue of a Sturm-Liouville problem
NASA Astrophysics Data System (ADS)
Astashenok, Artyom V.; Elizalde, Emilio; Yurov, Artyom V.
2014-01-01
It is observed that one of the Einstein-Friedmann equations formally has the aspect of a Sturm-Liouville problem, with the cosmological constant, Λ, playing the role of the spectral parameter (which hints at a connection with the Casimir effect). The subsequent formulation of appropriate boundary conditions leads to a set of admissible values for Λ, considered as eigenvalues of the corresponding linear operator. The simplest boundary conditions are assumed, namely that the eigenfunctions belong to the L² space, with the result that, when all energy conditions are satisfied, they yield a discrete spectrum for Λ > 0 and a continuous one for Λ < 0. A very interesting situation occurs when the discrete spectrum contains only one point: then, there is the possibility of obtaining appropriate cosmological conditions without invoking the anthropic principle. This possibility is shown to be realized in cyclic cosmological models, provided the potential of the matter field is similar to the potential of the scalar field. The dynamics of the universe in this case contains a sudden future singularity.
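The Sturm-Liouville reading can be illustrated numerically (a finite-difference toy with Dirichlet conditions, not the Einstein-Friedmann operator itself): discretizing −u'' = λu turns the boundary-value problem into a matrix eigenvalue problem whose discrete spectrum approximates λ_k = k²:

```python
import numpy as np

def dirichlet_spectrum(n=500, L=np.pi):
    """Finite-difference spectrum of -u'' = lambda*u, u(0) = u(L) = 0.
    The exact eigenvalues are (k*pi/L)**2, i.e. k**2 for L = pi."""
    h = L / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.sort(np.linalg.eigvalsh(A))
```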
Grid Convergence for Turbulent Flows (Invited)
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Rumsey, Christopher L.; Schwoppe, Axel
2015-01-01
A detailed grid convergence study has been conducted to establish accurate reference solutions corresponding to the one-equation linear eddy-viscosity Spalart-Allmaras turbulence model for two dimensional turbulent flows around the NACA 0012 airfoil and a flat plate. The study involved three widely used codes, CFL3D (NASA), FUN3D (NASA), and TAU (DLR), and families of uniformly refined structured grids that differ in the grid density patterns. Solutions computed by different codes on different grid families appear to converge to the same continuous limit, but exhibit different convergence characteristics. The grid resolution in the vicinity of geometric singularities, such as a sharp trailing edge, is found to be the major factor affecting accuracy and convergence of discrete solutions, more prominent than differences in discretization schemes and/or grid elements. The results reported for these relatively simple turbulent flows demonstrate that CFL3D, FUN3D, and TAU solutions are very accurate on the finest grids used in the study, but even those grids are not sufficient to conclusively establish an asymptotic convergence order.
Quantum gravitational collapse as a Dirac particle on the half line
NASA Astrophysics Data System (ADS)
Hassan, Syed Moeez; Husain, Viqar; Ziprick, Jonathan
2018-05-01
We show that the quantum dynamics of a thin spherical shell in general relativity is equivalent to the Coulomb-Dirac equation on the half line. The Hamiltonian has a one-parameter family of self-adjoint extensions with a discrete energy spectrum |E |
Continuous analogues of matrix factorizations
Townsend, Alex; Trefethen, Lloyd N.
2015-01-01
Analogues of singular value decomposition (SVD), QR, LU and Cholesky factorizations are presented for problems in which the usual discrete matrix is replaced by a ‘quasimatrix’, continuous in one dimension, or a ‘cmatrix’, continuous in both dimensions. Two challenges arise: the generalization of the notions of triangular structure and row and column pivoting to continuous variables (required in all cases except the SVD, and far from obvious), and the convergence of the infinite series that define the cmatrix factorizations. Our generalizations of triangularity and pivoting are based on a new notion of a ‘triangular quasimatrix’. Concerning convergence of the series, we prove theorems asserting convergence provided the functions involved are sufficiently smooth. PMID:25568618
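In a discretized setting, a quasimatrix QR reduces to Gram-Schmidt under a quadrature inner product; the following sketch (NumPy; the trapezoid-style weights and monomial columns are illustrative assumptions) is a numerical caricature of the continuous factorization:

```python
import numpy as np

def quasimatrix_qr(A, w):
    """Gram-Schmidt QR of a 'quasimatrix': columns of A are functions
    sampled on a grid, and w holds quadrature weights defining the
    continuous inner product <f, g> = sum(w * f * g)."""
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = np.sum(w * Q[:, i] * v)
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.sqrt(np.sum(w * v * v))
        Q[:, j] = v / R[j, j]
    return Q, R
```

On [−1, 1] with columns 1, x, x², the resulting Q columns are approximately proportional to the Legendre polynomials, the continuous analogue of orthonormal columns.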
Couple stresses and the fracture of rock.
Atkinson, Colin; Coman, Ciprian D; Aldazabal, Javier
2015-03-28
An assessment is made here of the role played by the micropolar continuum theory on the cracked Brazilian disc test used for determining rock fracture toughness. By analytically solving the corresponding mixed boundary-value problems and employing singular-perturbation arguments, we provide closed-form expressions for the energy release rate and the corresponding stress-intensity factors for both mode I and mode II loading. These theoretical results are augmented by a set of fracture toughness experiments on both sandstone and marble rocks. It is further shown that the morphology of the fracturing process in our centrally pre-cracked circular samples correlates very well with discrete element simulations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R³. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x-y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ‖u_t‖_2, is used to derive an optimal a priori error estimate for the current method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com
This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space, so that the main computational challenge is the accurate and fast evaluation of their eigenvalues or Fourier symbols, consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and as solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
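The diagonalization the authors rely on can be illustrated for a discrete periodic toy problem (NumPy; the kernel here is arbitrary, not one of the paper's fractional-power kernels): the nonlocal operator's Fourier symbol is the kernel's transform minus its integral, so applying it through the FFT must match the direct sum:

```python
import numpy as np

def apply_nonlocal_direct(u, w):
    """L u_i = sum_j w_j (u_{i-j} - u_i) on a periodic grid, by the sum."""
    n = len(u)
    return np.array([sum(w[j] * (u[(i - j) % n] - u[i]) for j in range(n))
                     for i in range(n)])

def apply_nonlocal_fft(u, w):
    """Same operator applied diagonally in Fourier space: the symbol
    is w_hat(k) - w_hat(0), where w_hat(0) = sum(w)."""
    symbol = np.fft.fft(w) - w.sum()
    return np.fft.ifft(symbol * np.fft.fft(u)).real
```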
A fast D.F.T. algorithm using complex integer transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
Winograd (1976) has developed a new class of algorithms which depend heavily on the computation of a cyclic convolution for computing the conventional DFT (discrete Fourier transform); this new algorithm, for a few hundred transform points, requires substantially fewer multiplications than the conventional FFT algorithm. Reed and Truong have defined a special class of finite Fourier-like transforms over GF(q²), where q = 2^p − 1 is a Mersenne prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 61. In the present paper it is shown that Winograd's algorithm can be combined with the aforementioned Fourier-like transform to yield a new algorithm for computing the DFT. A fast method for accurately computing the DFT of a sequence of complex numbers of very long transform-lengths is thus obtained.
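The identity Winograd's construction exploits runs in both directions; this sketch shows the easy direction, evaluating a cyclic convolution through the transform domain (NumPy over the complex numbers; the finite-field transforms over GF(q²) are not reproduced here):

```python
import numpy as np

def cyclic_convolution_direct(a, b):
    """Cyclic (circular) convolution by the definition: O(n^2)."""
    n = len(a)
    return np.array([sum(a[k] * b[(m - k) % n] for k in range(n))
                     for m in range(n)])

def cyclic_convolution_fft(a, b):
    """Convolution theorem: transform, multiply pointwise, invert."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
```

Winograd-style DFT algorithms use the reverse direction, turning a DFT of suitable length into short cyclic convolutions that require few multiplications.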
Dickinson, R J
1985-04-01
In a recent paper, Vaknine and Lorenz discuss the merits of lateral deconvolution of demodulated B-scans. While this technique will decrease the lateral blurring of single discrete targets, such as the diaphragm in their figure 3, it is inappropriate to apply the method to the echoes arising from inhomogeneous structures such as soft tissue. In this latter case, the echoes from individual scatterers within the resolution cell of the transducer interfere to give random fluctuations in received echo amplitude termed speckle. Although this process can be modeled as a linear convolution similar to that of conventional image formation theory, the process of demodulation is a nonlinear process which loses the all-important phase information, and prevents the subsequent restoration of the image by Wiener filtering, itself a linear process.
Naqvi, Shahid A; D'Souza, Warren D
2005-04-01
Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. 
The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulation with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
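For intuition, the shift-invariant "dose convolution" baseline that the sampling approach above improves on can be sketched in one dimension (NumPy; the integer-voxel shifts and the delta-like profile are illustrative assumptions):

```python
import numpy as np

def dose_convolution(static_dose, shifts, probs):
    """Blur a static 1-D dose profile with a discrete motion PDF by
    probability-weighted averaging of shifted copies -- valid only
    under shift invariance, which fails near surfaces and
    inhomogeneities as noted above."""
    blurred = np.zeros(len(static_dose))
    for s, p in zip(shifts, probs):
        blurred += p * np.roll(static_dose, s)
    return blurred
```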
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
2001-09-01
In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation are analyzed, building on prior work on rate-compatible punctured convolutional codes (RCPC codes) and their applications.
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The DSN telemetry system performance with convolutionally coded data, using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network, is described. Results for data rates from 80 bps to 115.2 kbps, with both S- and X-band receivers, are reported, including the results of both one- and two-way radio losses.
Surface tension and negative pressure interior of a non-singular ‘black hole’
NASA Astrophysics Data System (ADS)
Mazur, Pawel O.; Mottola, Emil
2015-11-01
The constant density interior Schwarzschild solution for a static, spherically symmetric collapsed star has a divergent pressure when its radius R ≤ (9/8)R_s = (9/4)GM. We show that this divergence is integrable, and induces a non-isotropic transverse stress with a finite redshifted surface tension on a spherical surface of radius R_0 = 3R √(1 − (8/9)(R/R_s)). For r < R_0 the interior Schwarzschild solution exhibits negative pressure. When R = R_s, the surface is localized at the Schwarzschild radius itself, R_0 = R_s, and the solution has constant negative pressure p = −ρ̄ everywhere in the interior r < R_s, thereby describing a gravitational condensate star, a fully collapsed non-singular state already inherent in and predicted by classical general relativity. The redshifted surface tension of the condensate star surface is given by τ_s = Δκ/8πG, where Δκ = κ_+ − κ_− = 2κ_+ = 1/R_s is the difference of the equal and opposite surface gravities between the exterior and interior Schwarzschild solutions. The First Law, dM = dE_V + τ_s dA, is recognized as a purely mechanical classical relation at zero temperature and zero entropy, describing the volume energy and surface energy changes respectively. The Schwarzschild time t of such a non-singular gravitational condensate star is a global time, fully consistent with unitary time evolution in quantum theory. A clear observational test of gravitational condensate stars with a physical surface versus black holes is the discrete surface modes of oscillation, which should be detectable by their gravitational wave signatures.
On the deep structure of the blowing-up of curve singularities
NASA Astrophysics Data System (ADS)
Elias, Juan
2001-09-01
Let C be a germ of curve singularity embedded in (k^n, 0). It is well known that the blowing-up of C centred on its closed point, Bl(C), is a finite union of curve singularities. If C is reduced we can iterate this process and, after a finite number of steps, we find only non-singular curves. This is the desingularization process. The main idea of this paper is to linearize the blowing-up of curve singularities Bl(C) → C. We perform this by studying the structure of 𝒪_Bl(C)/𝒪_C as a W-module, where W is a discrete valuation ring contained in 𝒪_C. Since 𝒪_Bl(C)/𝒪_C is a torsion W-module, its structure is determined by the invariant factors of 𝒪_C in 𝒪_Bl(C). The set of invariant factors is called in this paper the set of micro-invariants of C (see Definition 1·2). In the first section we relate the micro-invariants of C to the Hilbert function of C (Proposition 1·3), and we show how to compute them from the Hilbert function of some quotient of 𝒪_C (see Proposition 1·4). The main result of this paper is Theorem 3·3, where we give upper bounds on the micro-invariants in terms of the regularity, multiplicity and embedding dimension. As a corollary we improve and recover some results of [6]. These bounds can be established as a consequence of the study of the Hilbert function of a filtration of ideals g = {g[r,i+1]}_{i ≥ 0} of the tangent cone of 𝒪_C (see Section 2). The main property of g is that the ideals g[r,i+1] have initial degree bigger than the Castelnuovo-Mumford regularity of the tangent cone of 𝒪_C. Section 4 is devoted to the computation of the micro-invariants of branches; we show how to compute them from the semigroups of values of C and Bl(C) (Proposition 4·3). 
The case of monomial curve singularities is especially studied; we end Section 4 with some explicit computations. In the last section we study some geometric properties of C that can be deduced from special values of the micro-invariants, and we specially study the relationship of the micro-invariants with the Hilbert function of 𝒪_Bl(C). We end the paper by studying the natural equisingularity criteria that can be defined from the micro-invariants and their relationship with some of the known equisingularity criteria.
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters can directly affect the precision of object tracking. In this paper, a novel object tracker via an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the others in terms of AUC and precision.
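A sketch of the k-means++ seeding step used for filter initialization (NumPy; `patches` standing for vectorized image patches is an assumption for illustration, not a detail taken from the paper):

```python
import numpy as np

def kmeans_pp_init(patches, k, seed=0):
    """k-means++ seeding: after a random first centre, each subsequent
    centre is drawn with probability proportional to its squared
    distance from the nearest centre chosen so far."""
    rng = np.random.default_rng(seed)
    centres = [patches[rng.integers(len(patches))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((patches - c) ** 2, axis=1) for c in centres],
                    axis=0)
        centres.append(patches[rng.choice(len(patches), p=d2 / d2.sum())])
    return np.stack(centres)
```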
Numerical time-domain electromagnetics based on finite-difference and convolution
NASA Astrophysics Data System (ADS)
Lin, Yuanqu
Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. 
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.
Fluctuation-controlled front propagation
NASA Astrophysics Data System (ADS)
Ridgway, Douglas Thacher
1997-09-01
A number of fundamental pattern-forming systems are controlled by fluctuations at the front. These problems involve the interaction of an infinite dimensional probability distribution with a strongly nonlinear, spatially extended pattern-forming system. We have examined fluctuation-controlled growth in the context of the specific problems of diffusion-limited growth and biological evolution. Mean field theory of diffusion-limited growth exhibits a finite time singularity. Near the leading edge of a diffusion-limited front, this leads to acceleration and blowup. This may be resolved, in an ad hoc manner, by introducing a cutoff below which growth is weakened or eliminated (8). This model, referred to as the BLT model, captures a number of qualitative features of global pattern formation in diffusion-limited aggregation: contours of the mean field match contours of averaged particle density in simulation, and the modified mean field theory can form dendritic features not possible in the naive mean field theory. The morphology transition between dendritic and non-dendritic global patterns requires that BLT fronts have a Mullins-Sekerka instability of the wavefront shape, in order to form concave patterns. We compute the stability of BLT fronts numerically, and compare the results to fronts without a cutoff. A significant morphological instability of the BLT fronts exists, with a dominant wavenumber on the scale of the front width. For standard mean field fronts, no instability is found. The naive and ad hoc mean field theories are continuum-deterministic models intended to capture the behavior of a discrete stochastic system. A transformation which maps discrete systems into a continuum model with a singular multiplicative noise is known, however numerical simulations of the continuum stochastic system often give mean field behavior instead of the critical behavior of the discrete system. 
We have found a new interpretation of the singular noise, based on maintaining the symmetry of the absorbing state, but which is unsuccessful at capturing the behavior of diffusion-limited growth. In an effort to find a simpler model system, we turned to modelling fitness increases in evolution. The work was motivated by an experiment on vesicular stomatitis virus, a short (~9600 bp) single-stranded RNA virus. A highly bottlenecked viral population increases in fitness rapidly until a certain point, after which the fitness increases at a slower rate. This is well modeled by a constant population reproducing and mutating on a smooth fitness landscape. Mean field theory of this system displays the same infinite propagation velocity blowup as mean field diffusion-limited aggregation. However, we have been able to make progress on a number of fronts. One is solving systems of moment equations, where a hierarchy of moments is truncated arbitrarily at some level. Good results for front propagation velocity are found with just two moments, corresponding to inclusion of the basic finite population clustering effect ignored by mean field theory. In addition, for small mutation rates, most of the population will be entirely on a single site or two adjacent sites, and the density of these cases can be described and solved. (Abstract shortened by UMI.)
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
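As background for the free-distance quantities discussed above, a minimal sketch of computing the free distance of the classic rate-1/2 (7,5) convolutional encoder by a shortest-path search over its trellis (an illustrative calculation of the ordinary free distance, not the paper's transfer-function bound on individual effective free distances):

```python
import heapq

def free_distance(g1, g2, mem):
    """Minimum Hamming weight over all paths that leave the all-zero state
    and first return to it (Dijkstra over the encoder state diagram)."""
    def step(state, bit):
        reg = (bit << mem) | state          # shift the input bit into the register
        out = (bin(reg & g1).count("1") & 1, bin(reg & g2).count("1") & 1)
        return reg >> 1, out[0] + out[1]    # next state, branch output weight

    # start with the diverging branch (input 1 from the zero state)
    s0, w0 = step(0, 1)
    pq, best = [(w0, s0)], {}
    while pq:
        w, s = heapq.heappop(pq)
        if s == 0:
            return w                        # first remerge with state 0 = free distance
        if best.get(s, 1 << 30) <= w:
            continue
        best[s] = w
        for bit in (0, 1):
            ns, nw = step(s, bit)
            heapq.heappush(pq, (w + nw, ns))

print(free_distance(0b111, 0b101, 2))  # 5, the known free distance of the (7,5) code
```

The same search, run per input position of a multi-input encoder, is one way to tabulate the individual effective free distances the abstract refers to.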
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution location, Mach number, boattail angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Experimental study of current loss and plasma formation in the Z machine post-hole convolute
NASA Astrophysics Data System (ADS)
Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.
2017-01-01
The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode to anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the linear system involved. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems; the sparsity implicit in MRIs is exploited to recover images from significantly undersampled k-space. The challenge, however, is that random undersampling introduces incoherent artifacts, adding noise-like interference to the sparsely represented image. The recovery algorithms in the literature are not capable of fully removing these artifacts, so a denoising procedure is needed to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values.
The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
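The SVT step described above amounts to soft-thresholding the singular values of a matrix; a minimal NumPy illustration on a synthetic low-rank "image" (the sizes and threshold are arbitrary choices for the sketch, not the paper's settings):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-shrink the singular values of X by tau,
    the proximal operator of the nuclear norm used in low-rank recovery."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)            # shrink small (noise-dominated) values to zero
    return (U * s) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64))   # rank-4 ground truth
noisy = low_rank + 0.1 * rng.normal(size=(64, 64))               # add full-rank noise
denoised = svt(noisy, tau=2.0)

err_noisy = np.linalg.norm(noisy - low_rank)
err_den = np.linalg.norm(denoised - low_rank)
print(err_den < err_noisy)  # thresholding removes most of the noise energy
```

Because the noise contributes many small singular values while the signal concentrates in a few large ones, shrinking by tau suppresses the noise subspace at the cost of a small bias on the signal.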
2015-12-15
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their...detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep
Understanding perception of active noise control system through multichannel EEG analysis.
Bagha, Sangeeta; Tripathy, R K; Nanda, Pranati; Preetam, C; Das, Debi Prasad
2018-06-01
In this Letter, a method is proposed to investigate the effect of noise with and without active noise control (ANC) on multichannel electroencephalogram (EEG) signal. The multichannel EEG signal is recorded during different listening conditions such as silent, music, noise, ANC with background noise and ANC with both background noise and music. The multiscale analysis of EEG signal of each channel is performed using the discrete wavelet transform. The multivariate multiscale matrices are formulated based on the sub-band signals of each EEG channel. The singular value decomposition is applied to the multivariate matrices of multichannel EEG at significant scales. The singular value features at significant scales and the extreme learning machine classifier with three different activation functions are used for classification of multichannel EEG signal. The experimental results demonstrate that, for ANC with noise and ANC with noise and music classes, the proposed method has sensitivity values of 75.831% (p < 0.001) and 99.31% (p < 0.001), respectively. The method has an accuracy value of 83.22% for the classification of EEG signal with music and ANC with music as stimuli. The important finding of this study is that by the introduction of ANC, music can be better perceived by the human brain.
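As a rough illustration of the feature pipeline (wavelet sub-bands per channel, then singular values of the stacked multivariate matrix), a minimal sketch using a one-level Haar DWT; the wavelet, scale selection, and channel count here are placeholders, not the Letter's settings:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2].reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def svd_features(eeg, band=0):
    """Stack one wavelet sub-band from every channel into a multivariate matrix
    and return its singular values as the feature vector."""
    bands = np.stack([haar_dwt(ch)[band] for ch in eeg])
    return np.linalg.svd(bands, compute_uv=False)

rng = np.random.default_rng(0)
eeg = rng.normal(size=(8, 256))      # 8 channels, 256 samples (synthetic)
feats = svd_features(eeg)
print(feats.shape)  # (8,) -- one singular value per channel dimension
```

The singular values summarize how the sub-band energy is distributed across correlated channel directions, which is what makes them usable as inputs to a downstream classifier.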
Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method
NASA Astrophysics Data System (ADS)
Bekhoucha, F.; Rechak, S.; Cadou, J. M.
2016-12-01
In this paper, free vibrations of rotating clamped Euler-Bernoulli beams with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived from the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem that presents the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem into a standard form with real symmetric matrices. By using techniques to resolve the singular problems arising in the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical convergence tests are conducted and the obtained results are compared to exact values. Results obtained by continuation are also compared to those computed with the discrete eigenvalue problem.
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library.
ERIC Educational Resources Information Center
Umar, A.; Yusau, B.; Ghandi, B. M.
2007-01-01
In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced in higher secondary school classes, and has the potential of providing a good background for the well-known convolution of functions.
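The convolution of two series is the Cauchy product, c_n = Σ_k a_k b_{n-k}, which is exactly how polynomial coefficients multiply; a short sketch:

```python
def convolve_series(a, b):
    """Cauchy product of two (truncated) series: c[n] = sum_k a[k] * b[n-k],
    i.e. the coefficient sequence of the product of the two power series."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2x + x^2)(1 + x) = 1 + 3x + 3x^2 + x^3
print(convolve_series([1, 2, 1], [1, 1]))  # [1, 3, 3, 1]
```

Replacing the sums over indices with integrals over a shift variable gives the familiar convolution of functions the note alludes to.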
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
Convolutional Codes, in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer, Rate Compatible Punctured Convolutional Codes, in Proc. Int. Conf. ... achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error
2008-09-01
Convolutional Encoder Block Diagram of code rate r = 1/2 and ... most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n ... convolutional code for r = 1/2 and κ = 3, namely [7 5], is used. Figure 2: Convolutional Encoder Block Diagram of code rate r = 1/2
Pérez-Arancibia, Carlos; Bruno, Oscar P
2014-08-01
This paper presents high-order integral equation methods for the evaluation of electromagnetic wave scattering by dielectric bumps and dielectric cavities on perfectly conducting or dielectric half-planes. In detail, the algorithms introduced in this paper apply to eight classical scattering problems, namely, scattering by a dielectric bump on a perfectly conducting or a dielectric half-plane, and scattering by a filled, overfilled, or void dielectric cavity on a perfectly conducting or a dielectric half-plane. In all cases field representations based on single-layer potentials for appropriately chosen Green functions are used. The numerical far fields and near fields exhibit excellent convergence as discretizations are refined, even at and around points where singular fields and infinite currents exist.
Soluble Model Fluids with Complete Scaling and Yang-Yang Features
NASA Astrophysics Data System (ADS)
Cerdeiriña, Claudio A.; Orkoulas, Gerassimos; Fisher, Michael E.
2016-01-01
Yang-Yang (YY) and singular diameter critical anomalies arise in exactly soluble compressible cell gas (CCG) models that obey complete scaling with pressure mixing. Thus, on the critical isochore ρ = ρ_c, C̃_μ ≡ -T d²μ/dT² diverges as |t|^(-α) when t ∝ T - T_c → 0⁻, while ρ_d - ρ_c ~ |t|^(2β), where ρ_d(T) = ½[ρ_liq + ρ_gas]. When the discrete local CCG cell volumes fluctuate freely, the YY ratio R_μ = C̃_μ/C_V may take any value -∞
A fast collocation method for a variable-coefficient nonlocal diffusion model
NASA Astrophysics Data System (ADS)
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N³) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N²) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N²) to O(N). Numerical results are presented to show the utility of the fast method.
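A common way such dense-matrix structure yields an O(N log N) matrix-vector product (illustrative of the general idea, not the authors' exact scheme) is to embed a Toeplitz stiffness matrix in a circulant one and multiply via the FFT:

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply a Toeplitz matrix (first column `col`, first row `row`) by x
    in O(N log N): embed it in a circulant matrix, whose action diagonalizes
    under the FFT."""
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])   # circulant embedding, length 2n
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))
    return y[:n].real

# verify against the dense O(N^2) product
n = 8
rng = np.random.default_rng(0)
col, row = rng.normal(size=n), rng.normal(size=n)
row[0] = col[0]                                    # consistent diagonal entry
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)] for i in range(n)])
x = rng.normal(size=n)
print(np.allclose(toeplitz_matvec(col, row, x), T @ x))  # True
```

Inside a Krylov iteration, this replaces the dense multiply, which together with an O(N) assembly gives the per-iteration costs quoted in the abstract.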
A vortex wake capturing method for potential flow calculations
NASA Technical Reports Server (NTRS)
Murman, E. M.; Stremel, P. M.
1982-01-01
A method is presented for modifying finite difference solutions of the potential equation to include the calculation of non-planar vortex wake features. The approach is an adaptation of Baker's 'cloud in cell' algorithm developed for the stream function-vorticity equations. The vortex wake is tracked in a Lagrangian frame of reference as a group of discrete vortex filaments. These are distributed to the Eulerian mesh system on which the velocity is calculated by a finite difference solution of the potential equation. An artificial viscosity introduced by the finite difference equations removes the singular nature of the vortex filaments. Computed examples are given for the two-dimensional time dependent roll-up of vortex wakes generated by wings with different spanwise loading distributions.
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1982-01-01
The present investigation is concerned with an important class of power conditioning networks, taking into account self-oscillating dc-to-square-wave transistor inverters. The considered circuits are widely used both as the principal power converting and processing means in many systems and as low-power analog-to-discrete-time converters for controlling the switching of the output-stage semiconductors in a variety of power conditioning systems. Aspects of piecewise-linear modeling are discussed, taking into consideration component models, and an equivalent-circuit model. Questions of singular point analysis and state plane representation are also investigated, giving attention to limit cycles, starting circuits, the region of attraction, a hard oscillator, and a soft oscillator.
Automated retinal layer segmentation and characterization
NASA Astrophysics Data System (ADS)
Luisi, Jonathan; Briley, David; Boretsky, Adam; Motamedi, Massoud
2014-05-01
Spectral Domain Optical Coherence Tomography (SD-OCT) is a valuable diagnostic tool in both clinical and research settings. The depth-resolved intensity profiles generated by light backscattered from discrete layers of the retina provide a non-invasive method of investigating progressive diseases and injury within the eye. This study demonstrates the application of steerable convolution filters capable of automatically separating gradient orientations to identify edges and delineate tissue boundaries. The edge maps were recombined to measure the thickness of individual retinal layers. This technique was successfully applied to longitudinally monitor changes in retinal morphology in a mouse model of laser-induced choroidal neovascularization (CNV) and human data from age-related macular degeneration patients. The steerable filters allow for direct segmentation of noisy images, while novel recombination of weaker segmentations allows for denoising post-segmentation. The segmentation-before-denoising strategy allows the rapid detection of thin retinal layers even under suboptimal imaging conditions.
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
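The core operation, 3D convolution via the convolution theorem, can be sketched in a few lines (a plain NumPy CPU analogue of the CUDA implementation; zero-padding both operands to the full output size avoids circular wrap-around):

```python
import numpy as np

def fft_convolve3d(vol, ker):
    """Linear 3D convolution via the convolution theorem: zero-pad to the full
    output size, multiply the spectra elementwise, and invert."""
    shape = tuple(s + t - 1 for s, t in zip(vol.shape, ker.shape))
    F = np.fft.rfftn(vol, shape) * np.fft.rfftn(ker, shape)
    return np.fft.irfftn(F, shape)

# verify against a direct shift-and-add linear convolution on a small volume
rng = np.random.default_rng(0)
vol, ker = rng.normal(size=(8, 8, 8)), rng.normal(size=(3, 3, 3))
ref = np.zeros(tuple(s + t - 1 for s, t in zip(vol.shape, ker.shape)))
for i in range(3):
    for j in range(3):
        for k in range(3):
            ref[i:i + 8, j:j + 8, k:k + 8] += ker[i, j, k] * vol
print(np.allclose(fft_convolve3d(vol, ker), ref))  # True
```

For images too large for device memory, the article's decimation-in-frequency decomposition splits this transform into pieces that fit on the GPU, which the sketch above does not attempt.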
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-15
In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.
NASA Astrophysics Data System (ADS)
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-01
In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Monte Carlo Markov Chains, allowing orders of magnitude improvement in speed.
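The Monte Carlo dropout procedure underlying this kind of approximate Bayesian inference can be sketched with a toy two-layer network: dropout is kept active at test time and the spread over stochastic forward passes estimates the predictive uncertainty (the sizes, rate, and weights here are arbitrary, not the paper's architecture):

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, rate=0.1, T=200, rng=np.random.default_rng(0)):
    """Monte Carlo dropout: sample T stochastic forward passes with dropout
    active; the sample mean and std approximate the predictive posterior."""
    preds = []
    for _ in range(T):
        h = np.maximum(W1 @ x, 0.0)                    # hidden layer, ReLU
        mask = rng.random(h.shape) >= rate             # Bernoulli dropout mask
        preds.append(W2 @ (h * mask / (1.0 - rate)))   # inverted-dropout scaling
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)       # predictive mean, uncertainty

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(16, 4)), rng.normal(size=(2, 16))
mean, std = mc_dropout_predict(rng.normal(size=4), W1, W2)
print(mean.shape, std.shape)  # (2,) (2,)
```

Tuning the dropout rate so that the resulting intervals achieve the nominal coverage is the calibration step the abstract describes.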
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
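The POD step can be sketched as an SVD of the snapshot matrix; a minimal illustration with synthetic rank-3 snapshot data (sizes are arbitrary, and the convolution with the gust profile that follows in the paper is omitted):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition: the leading r left singular vectors of
    the (space x time) snapshot matrix give the optimal rank-r spatial basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

rng = np.random.default_rng(0)
modes = np.linalg.qr(rng.normal(size=(200, 3)))[0]   # 3 true spatial modes
coeffs = rng.normal(size=(3, 50))                    # their time histories
snaps = modes @ coeffs                               # 50 pressure snapshots
Phi, s = pod_basis(snaps, r=3)
recon = Phi @ (Phi.T @ snaps)                        # project and reconstruct
print(np.allclose(recon, snaps))  # rank-3 data is captured exactly by 3 modes
```

In the reduced-order model, the time histories of the modal coefficients (rather than the full pressure field) are what the convolution integral propagates for a new gust input.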
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
Indirect iterative learning control for a discrete visual servo without a camera-robot model.
Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan
2007-08-01
This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
NASA Astrophysics Data System (ADS)
Zhao, J. M.; Tan, J. Y.; Liu, L. H.
2013-01-01
A new second-order form of the radiative transfer equation (named MSORTE) is proposed, which overcomes the singularity problem of a previously proposed second-order radiative transfer equation [J.E. Morel, B.T. Adams, T. Noh, J.M. McGhee, T.M. Evans, T.J. Urbatsch, Spatial discretizations for self-adjoint forms of the radiative transfer equations, J. Comput. Phys. 214 (1) (2006) 12-40 (where it was termed SAAI); J.M. Zhao, L.H. Liu, Second order radiative transfer equation and its properties of numerical solution using finite element method, Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some locations have a very small or zero extinction coefficient. The MSORTE contains a naturally introduced diffusion (second-order) term, which gives it better numerical properties than the classic first-order radiative transfer equation (RTE). The stability and convergence characteristics of the MSORTE discretized by a central difference scheme are analyzed theoretically, and the second-order forms of the radiative transfer equation are proved to be more numerically stable than the RTE when discretized by central-difference-type methods. A collocation meshless method is developed based on the MSORTE to solve radiative transfer in inhomogeneous media. Several critical test cases are taken to verify the performance of the presented method. The collocation meshless method based on the MSORTE is demonstrated to be capable of stably and accurately solving radiative transfer in strongly inhomogeneous media, in media with void regions, and even with discontinuous extinction coefficients.
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some existing problems in current research, and finally offers an outlook on the future development of deep convolutional neural networks.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply and inexpensively, and the data are easy for everyone to access. A problem arises, however, when someone else claims a creation as their own property or modifies part of it. Copyright protection therefore becomes necessary; one approach is watermarking of digital images. Applying a watermarking technique to digital data, especially images, allows the watermark to remain entirely invisible when inserted into a carrier image: the carrier image does not suffer any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and the robustness of the image watermark. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur, rescaling, and JPEG compression, while embedding in high frequencies is robust to Gaussian noise.
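As a hedged illustration of the SVD-in-DWT idea, the following Python/NumPy sketch embeds a watermark by perturbing the singular values of the LL subband of a one-level Haar DWT; the Haar filter, the scaling factor alpha, and the embedding rule S' = S + αS_w are simplifying assumptions, not the paper's exact scheme:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT (x must have even dimensions):
    returns the LL, LH, HL, HH subbands."""
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def embed(LL, watermark, alpha=0.05):
    """SVD-domain embedding in the LL subband: the carrier's singular values
    are perturbed by the watermark's singular values (alpha is an assumed
    scaling factor). Returns the modified subband and the original singular
    values, which the extractor needs."""
    U, S, Vt = np.linalg.svd(LL)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    return U @ np.diag(S + alpha * Sw) @ Vt, S
```

Extraction inverts the rule: recover the watermark's singular values as (svd(LL') - S) / alpha. Low-frequency (LL) embedding is what the abstract reports as robust to blur, rescaling, and JPEG compression.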
NASA Astrophysics Data System (ADS)
Tallapragada, P.; Kelly, S. D.
2015-11-01
Diverse mechanisms for animal locomotion in fluids rely on vortex shedding to generate propulsive forces. This is a complex phenomenon that depends essentially on fluid viscosity, but its influence can be modeled in an inviscid setting by introducing localized velocity constraints to systems comprising solid bodies interacting with ideal fluids. In the present paper, we invoke an unsteady version of the Kutta condition from inviscid airfoil theory and a more primitive stagnation condition to model vortex shedding from a geometrically contrasting pair of free planar bodies representing idealizations of swimming animals or robotic vehicles. We demonstrate with simulations that these constraints are sufficient to enable both bodies to propel themselves with very limited actuation. The solitary actuator in each case is a momentum wheel internal to the body, underscoring the symmetry-breaking role played by vortex shedding in converting periodic variations in a generic swimmer's angular momentum to forward locomotion. The velocity constraints are imposed discretely in time, resulting in the shedding of discrete vortices; we observe the roll-up of these vortices into distinctive wake structures observed in viscous models and physical experiments.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1989-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of the details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds depending only on the surface temperature and the height of the atmosphere were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that the vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1987-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of the details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds depending only on the surface temperature and the height of the atmosphere were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that the vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. 
Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU-computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
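The γ analysis with 2%/2 mm criteria used above can be sketched in one dimension as follows (Python/NumPy; the brute-force search over all points and global dose normalization are simplifying assumptions relative to clinical implementations):

```python
import numpy as np

def gamma_index(ref, ev, dx, dose_crit=0.02, dta_crit=2.0):
    """Brute-force 1D gamma analysis with global normalization.
    ref, ev : dose profiles sampled on the same grid with spacing dx (mm)
    dose_crit : fractional dose-difference criterion (2% of max reference dose)
    dta_crit  : distance-to-agreement criterion in mm
    Returns the gamma value at each reference point (pass if <= 1)."""
    x = np.arange(len(ref)) * dx
    dmax = ref.max()
    gammas = np.empty(len(ref))
    for i in range(len(ref)):
        dd = (ev - ref[i]) / (dose_crit * dmax)   # normalized dose difference
        dta = (x - x[i]) / dta_crit               # normalized distance
        gammas[i] = np.sqrt(dd**2 + dta**2).min() # best match over all points
    return gammas
```

The passing rate quoted in the abstract corresponds to the fraction of points with gamma ≤ 1.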
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.
Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm.
The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
Thin-skinned deformation of sedimentary rocks in Valles Marineris, Mars
Metz, Joannah; Grotzinger, John P.; Okubo, Chris; Milliken, Ralph
2010-01-01
Deformation of sedimentary rocks is widespread within Valles Marineris, characterized by both plastic and brittle deformation identified in Candor, Melas, and Ius Chasmata. We identified four deformation styles using HiRISE and CTX images: kilometer-scale convolute folds, detached slabs, folded strata, and pull-apart structures. Convolute folds are detached rounded slabs of material with alternating dark- and light-toned strata and a fold wavelength of about 1 km. The detached slabs are isolated rounded blocks of material, but they exhibit only highly localized evidence of stratification. Folded strata are composed of continuously folded layers that are not detached. Pull-apart structures are composed of stratified rock that has broken off into small irregularly shaped pieces showing evidence of brittle deformation. Some areas exhibit multiple styles of deformation and grade from one type of deformation into another. The deformed rocks are observed over thousands of kilometers, are limited to discrete stratigraphic intervals, and occur over a wide range in elevations. All deformation styles appear to be of likely thin-skinned origin. CRISM reflectance spectra show that some of the deformed sediments contain a component of monohydrated and polyhydrated sulfates. Several mechanisms could be responsible for the deformation of sedimentary rocks in Valles Marineris, such as subaerial or subaqueous gravitational slumping or sliding and soft sediment deformation, where the latter could include impact-induced or seismically induced liquefaction. These mechanisms are evaluated based on their expected pattern, scale, and areal extent of deformation. Deformation produced from slow subaerial or subaqueous landsliding and liquefaction is consistent with the deformation observed in Valles Marineris.
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
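The final step described above, converting the constant-flow approximation to arbitrary multirate conditions via discrete convolution, amounts to superposing delayed copies of the unit-rate response at each rate change (Duhamel's principle). A minimal Python/NumPy sketch, where the piecewise-constant rate schedule is an assumed input format:

```python
import numpy as np

def multirate_pressure(unit_response, rates):
    """Superpose a constant-rate unit pressure response to obtain the response
    to a piecewise-constant rate schedule:
        p[n] = sum_k (q[k] - q[k-1]) * u[n - k]   (discrete convolution
    of the rate steps with the unit response)."""
    dq = np.diff(np.concatenate(([0.0], rates)))   # rate steps at each sample
    n = len(unit_response)
    p = np.zeros(n)
    for k, step in enumerate(dq):
        if step != 0.0:
            p[k:] += step * unit_response[:n - k]  # delayed, scaled copy
    return p
```

With a constant rate the result collapses to a simple scaling of the unit response, which is the consistency check used below.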
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2015-06-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10-30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
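The efficiency claim above rests on a separable 2D low-pass filter being realizable as two successive 1D convolutions. A minimal Python/NumPy sketch on a periodic grid (the smoothing kernel, nudging coefficient, and circular boundary are illustrative assumptions, not the ACCESS implementation):

```python
import numpy as np

def conv1d_axis(f, k, axis):
    """Circular 1D convolution of field f with kernel k along one axis
    (periodic domain assumed)."""
    half = len(k) // 2
    out = np.zeros_like(f, dtype=float)
    for i, w in enumerate(k):
        out += w * np.roll(f, i - half, axis=axis)
    return out

def nudge(model, target, k, coeff=0.1):
    """Nudge the model field toward the low-pass-filtered increment from the
    target, using two successive 1D convolutions in place of one 2D
    convolution (the cost saving described in the abstract)."""
    incr = conv1d_axis(conv1d_axis(target - model, k, 0), k, 1)
    return model + coeff * incr

k = np.array([0.25, 0.5, 0.25])  # illustrative separable smoothing kernel
```

For an n×n grid and kernel length m, the separable form costs O(n²·2m) instead of O(n²·m²), which is the source of the reported 10-30× speedup.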
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2014-10-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
Inelastic losses in X-ray absorption theory
NASA Astrophysics Data System (ADS)
Campbell, Luke Whalin
There is a surprising lack of many-body effects observed in XAS (X-ray Absorption Spectroscopy) experiments. While collective excitations and other satellite effects account for between 20% and 40% of the spectral weight of the core hole and photoelectron excitation spectrum, the only commonly observed many-body effect is a relatively structureless amplitude reduction to the fine structure, typically no more than a 10% effect. As a result, many-particle effects are typically neglected in the XAS codes used to predict and interpret modern experiments. To compensate, the amplitude reduction factor is simply fitted to experimental data. In this work, a quasi-boson model is developed to treat the case of XAS, when the system has both a photoelectron and a core hole. We find that there is a strong interference between the extrinsic and intrinsic losses. The interference reduces the excitation amplitudes at low energies, where the core-hole- and photoelectron-induced excitations tend to cancel. At high energies, the interference vanishes, and the theory reduces to the sudden approximation. The X-ray absorption spectrum including many-body excitations is represented by a convolution of the one-electron absorption spectrum with an energy-dependent spectral function. The latter has an asymmetric quasiparticle peak and broad satellite structure. The net result is a phasor sum, which yields the many-body amplitude reduction and phase shift of the fine structure oscillations (EXAFS), and possibly additional satellite structure. Calculations for several cases of interest are found to be in reasonable agreement with experiment. Edge singularity effects and deviations from the final state rule arising from this theory are also discussed. The ab initio XAS code FEFF has been extended for calculations of the many-body amplitude reduction and phase shift in X-ray spectroscopies. A new broadened plasmon-pole self-energy is added.
The dipole matrix elements are modified to include a projection operator to calculate deviations from the final state rule and edge singularities.
NASA Astrophysics Data System (ADS)
Zhou, Yajun
This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics.
Meanwhile, exploiting the properties of compact operators, we outline the geometric and physical conditions that guarantee a robust solution to the light scattering problem, and devise an asymptotic solution to the Born equation of electromagnetic scattering for arbitrarily shaped dielectric in a non-perturbative manner.
Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment
2011-02-01
[Extraction residue; abstract not recoverable. Surviving fragments mention low-code-rate convolutional codes and prioritized rate-compatible punctured convolutional (RCPC) codes, cite "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, vol. 42, no. 12, pp. 3073-3079, Dec., and include glossary entries: QoS (quality of service), RCPC (rate-compatible punctured convolutional codes), SNR (signal-to-noise ratio).]
A Video Transmission System for Severely Degraded Channels
2006-07-01
[Extraction residue; abstract not recoverable. Surviving fragments describe protecting an SPIHT bitstream with rate-compatible punctured convolutional (RCPC) codes, cite J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on Communications, and note that Farvardin used rate-compatible convolutional codes, observing that for some transmission rates one of their EEP schemes may...]
There is no MacWilliams identity for convolutional codes. [Transmission gain comparison]
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
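For readers unfamiliar with the objects being compared, here is a hedged Python sketch of a rate-1/2 convolutional encoder using the common (7,5) octal generators; this is a textbook example, not one of the codes constructed in the paper:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3, with the standard
    (7,5) octal generator polynomials (an illustrative, widely used choice).
    Each input bit produces two output bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111     # shift the new bit in
        out.append(bin(state & g1).count('1') % 2)  # parity under generator 1
        out.append(bin(state & g2).count('1') % 2)  # parity under generator 2
    return out
```

Unlike a block code, the output depends on past inputs through the encoder state, which is why weight-enumerator tools such as the MacWilliams identity do not carry over directly.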
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Qu, Xiaobo; He, Yifan
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multi-scale context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multi-scale context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
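The competitive selection described in advantage (2) can be sketched in Python/NumPy as an elementwise maximum over two convolution branches of different kernel sizes; the kernels, the naive zero-padded correlation, and the two-branch setup are illustrative assumptions, not the trained network:

```python
import numpy as np

def conv2d_same(img, k):
    """Naive zero-padded 'same' cross-correlation (no kernel flip; equal to
    convolution for the symmetric kernels used here)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

def competitive_block(img, k3, k5):
    """Multi-scale competition: run a 3x3 branch and a 5x5 branch, then keep
    the elementwise maximum response (maxout-style selection between scales)."""
    return np.maximum(conv2d_same(img, k3), conv2d_same(img, k5))
```

In the paper's network the branches are learned filters; the max operation is what lets the model pick, per pixel, whichever scale responds most strongly.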
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. Ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained so that, starting from an unprocessed input image, the quality of the output image approached that of the ground-truth image. For the image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracy of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible difference in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; its implementation may therefore require a re-evaluation of prescription doses. PMID:29657896
{Γ}-Convergence Analysis of a Generalized XY Model: Fractional Vortices and String Defects
NASA Astrophysics Data System (ADS)
Badal, Rufat; Cicalese, Marco; De Luca, Lucia; Ponsiglione, Marcello
2018-03-01
We propose and analyze a generalized two-dimensional XY model, whose interaction potential has n weighted wells, describing corresponding symmetries of the system. As the lattice spacing vanishes, we derive by {Γ}-convergence the discrete-to-continuum limit of this model. In the energy regime we deal with, the asymptotic ground states exhibit fractional vortices, connected by string defects. The {Γ}-limit takes both contributions into account, through a renormalized energy, depending on the configuration of fractional vortices, and a surface energy, proportional to the length of the strings. Our model describes in a simple way several topological singularities arising in physics and materials science, among them disclinations and string defects in liquid crystals, fractional vortices and domain walls in micromagnetics, and partial dislocations and stacking faults in crystal plasticity.
A Lesson from the LQG String: Diffeomorphism Covariance is Enough
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helling, Robert C.
2009-12-15
The importance of manifest diffeomorphism invariance is often cited as a major strength of the loop approach to the quantization of gravity. We study this in a simple example: the world-sheet theory of the bosonic string. The conventional treatment differs from the loop-inspired one in the choice of vacuum state, the latter being invariant while the former is only covariant. We argue that physically only covariance is required, and we display the physical consequences of the invariant but discontinuous choice in the one-dimensional example of the harmonic oscillator. Finally, we demonstrate that the discretization of infinitesimally singular expressions, as common in the loop approach, is not unique but can be seen in analogy with the choice of higher-derivative counterterms.
2011-05-01
Unequal error protection (UEP) is achieved with Rate-Compatible Punctured Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code.
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve the optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f imaging system, the optical convolution of the two input images can be achieved in the image plane.
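The physics behind this 4f scheme is the convolution theorem: a pointwise product in the Fourier plane corresponds to a convolution in the image plane. A minimal numerical sketch of that identity (toy arrays, not the optical experiment itself):

```python
import numpy as np

def convolve_via_fourier(img_a, img_b):
    """Circular convolution of two equal-size images computed as a
    pointwise product in the Fourier plane (convolution theorem)."""
    spectrum = np.fft.fft2(img_a) * np.fft.fft2(img_b)
    return np.real(np.fft.ifft2(spectrum))

# A delta image at the origin acts as the identity of convolution,
# so convolving it with any image b must return b itself.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.arange(16, dtype=float).reshape(4, 4)
out = convolve_via_fourier(a, b)
assert np.allclose(out, b)
```
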
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address an estimation method for the isometric muscle tension of fingers, as fundamental research toward a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
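The two-stage pipeline described above can be sketched numerically. This is an illustrative reconstruction only: the function name, sampling rate, Gaussian width, and twitch time constant are assumptions, not values from the abstract.

```python
import numpy as np

def estimate_tension(spikes, fs=1000.0, sigma=0.02, twitch_tau=0.05):
    """Two convolutions: spike array -> firing-probability density
    (Gaussian kernel), then density -> tension (twitch impulse response)."""
    # 1st convolution: Gaussian kernel approximating the normal density
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    gauss = np.exp(-t**2 / (2 * sigma**2))
    gauss /= gauss.sum()
    density = np.convolve(spikes, gauss, mode="same")
    # 2nd convolution: unit-peak twitch as the motor-unit impulse response
    tt = np.arange(0, 5 * twitch_tau, 1 / fs)
    twitch = (tt / twitch_tau) * np.exp(1 - tt / twitch_tau)
    return np.convolve(density, twitch, mode="full")[: len(spikes)]

spikes = np.zeros(1000)
spikes[[100, 300, 320, 340]] = 1.0   # hypothetical detected spike times
tension = estimate_tension(spikes)
assert tension.shape == spikes.shape and tension.max() > 0
```
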
High Performance Implementation of 3D Convolutional Neural Networks on a GPU.
Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie
2017-01-01
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
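The saving WMFA offers can be seen already in the smallest 1D case, F(2,3): two outputs of a 3-tap filter are produced with 4 multiplications instead of the 6 a direct sliding dot product needs. A minimal sketch of that transform (illustrative; the paper applies the 3D generalization):

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter g over a
    4-sample input tile d, using 4 multiplications instead of 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 1.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])  # direct sliding dot product
assert np.allclose(winograd_f23(d, g), direct)
```
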
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction.
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces new development technique to improve the Scanning Electron Microscope (SEM) image quality and we name it as sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with convolution operator. By using this new proposed technique, it shows that the new modified MPHE performs better than original MPHE. In addition, the sub-blocking method consists of convolution operator which can help to remove the blocking effect for SEM images after applying this new developed technique. Hence, by using the convolution operator, it effectively removes the blocking effect by properly distributing the suitable pixel value for the whole image. Overall, the SUB-B-MPHE with convolution outperforms the rest of methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc. © Wiley Periodicals, Inc.
Exact solutions to model surface and volume charge distributions
NASA Astrophysics Data System (ADS)
Mukhopadhyay, S.; Majumdar, N.; Bhattacharya, P.; Jash, A.; Bhattacharya, D. S.
2016-10-01
Many important problems in several branches of science and technology deal with charges distributed along a line, over a surface and within a volume. Recently, we have made use of new exact analytic solutions of surface charge distributions to develop the nearly exact Boundary Element Method (neBEM) toolkit. This 3D solver has been successful in removing some of the major drawbacks of the otherwise elegant Green's function approach and has been found to be very accurate throughout the computational domain, including near- and far-field regions. Use of truly distributed singularities (in contrast to nodally concentrated ones) on rectangular and right-triangular elements used for discretizing any three-dimensional geometry has essentially removed many of the numerical and physical singularities associated with the conventional BEM. In this work, we will present this toolkit and the development of several numerical models of space charge based on exact closed-form expressions. In one of the models, Particles on Surface (ParSur), the space charge inside a small elemental volume of any arbitrary shape is represented as being smeared on several surfaces representing the volume. From the studies, it can be concluded that the ParSur model is successful in producing estimates close to those obtained from first principles, especially close to and within the cell. In the paper, we will show initial applications of ParSur and other models in problems related to high energy physics.
NASA Astrophysics Data System (ADS)
Gyllenram, W.; Nilsson, H.; Davidson, L.
2007-04-01
This paper analyzes the properties of viscous swirling flow in a pipe. The analysis is based on the time-averaged quasicylindrical Navier-Stokes equations and is applicable to steady, unsteady, and turbulent swirling flow. A method is developed to determine the critical level of swirl (vortex breakdown) for an arbitrary vortex. The method can also be used for an estimation of the radial velocity profile if the other components are given or measured along a single radial line. The quasicylindrical equations are rearranged to yield a single ordinary differential equation for the radial distribution of the radial velocity component. The equation is singular for certain levels of swirl. It is shown that the lowest swirl level at which the equation is singular corresponds exactly to the sufficient condition for axisymmetric vortex breakdown as derived by Wang and Rusak [J. Fluid Mech. 340, 177 (1997)] and Rusak et al. [AIAA J. 36, 1848 (1998)]. In narrow regions around the critical levels of swirl, the solution violates the quasicylindrical assumptions and the flow must undergo a drastic change of structure. The critical swirl level is determined by the sign change of the smallest eigenvalue of the discrete linear operator which relates the radial velocities to effects of viscosity and turbulence. It is shown that neither viscosity nor turbulence directly alters the critical level of swirl.
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
The source is encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate-Compatible Punctured Convolutional (RCPC) codes.
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
Rate-Compatible Punctured Convolutional (RCPC) codes are used for channel coding. The coding rate for H.264/AVC video compression is determined, and at the data link layer the RCPC channel coding rate is selected.
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
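As a concrete illustration of the objects this theory studies (not code from the article), a rate-1/2 convolutional encoder with the textbook constraint-length-3 generators (7, 5) in octal can be sketched as:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: for each input bit, shift it into
    a k-bit register and emit one parity bit per generator polynomial."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # shift in new bit
        out.append(bin(state & g1).count("1") % 2)    # parity under g1
        out.append(bin(state & g2).count("1") % 2)    # parity under g2
    return out

encoded = conv_encode([1, 0, 1, 1])
# two output bits per input bit: rate 1/2
assert len(encoded) == 8
assert encoded == [1, 1, 1, 0, 0, 0, 0, 1]
```
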
Rovelli, Carlo
2008-01-01
The problem of describing the quantum behavior of gravity, and thus understanding quantum spacetime, is still open. Loop quantum gravity is a well-developed approach to this problem. It is a mathematically well-defined background-independent quantization of general relativity, with its conventional matter couplings. Today research in loop quantum gravity forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained so far are: (i) The computation of the spectra of geometrical quantities such as area and volume, which yield tentative quantitative predictions for Planck-scale physics. (ii) A physical picture of the microstructure of quantum spacetime, characterized by Planck-scale discreteness. Discreteness emerges as a standard quantum effect from the discrete spectra, and provides a mathematical realization of Wheeler's "spacetime foam" intuition. (iii) Control of spacetime singularities, such as those in the interior of black holes and the cosmological one. This, in particular, has opened up the possibility of a theoretical investigation into the very early universe and the spacetime regions beyond the Big Bang. (iv) A derivation of the Bekenstein-Hawking black-hole entropy. (v) Low-energy calculations, yielding n-point functions well defined in a background-independent context. The theory is at the roots of, or strictly related to, a number of formalisms that have been developed for describing background-independent quantum field theory, such as spin foams, group field theory, causal spin networks, and others. I give here a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.
Dispersion analysis of leaky guided waves in fluid-loaded waveguides of generic shape.
Mazzotti, M; Marzani, A; Bartoli, I
2014-01-01
A fully coupled 2.5D formulation is proposed to compute the dispersive parameters of waveguides with arbitrary cross-section immersed in infinite inviscid fluids. The discretization of the waveguide is performed by means of a Semi-Analytical Finite Element (SAFE) approach, whereas a 2.5D BEM formulation is used to model the impedance of the surrounding infinite fluid. The kernels of the boundary integrals contain the fundamental solutions of the space Fourier-transformed Helmholtz equation, which governs the wave propagation process in the fluid domain. Numerical difficulties related to the evaluation of singular integrals are avoided by using a regularization procedure. To improve the numerical stability of the discretized boundary integral equations for the external Helmholtz problem, the so-called CHIEF method is used. The discrete wave equation results in a nonlinear eigenvalue problem in the complex axial wavenumbers that is solved at the frequencies of interest by means of a contour integral algorithm. In order to separate physical from non-physical solutions and to fulfill the requirement of holomorphicity of the dynamic stiffness matrix inside the complex wavenumber contour, the phase of the radial bulk wavenumber is uniquely defined by enforcing the Snell-Descartes law at the fluid-waveguide interface. Three numerical applications are presented. The computed dispersion curves for a circular bar immersed in oil are in agreement with those extracted using the Global Matrix Method. Novel results are presented for viscoelastic steel bars of square and L-shaped cross-section immersed in water. Copyright © 2013 Elsevier B.V. All rights reserved.
Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...
2015-03-04
Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolution neural network framework, this paper removes the constraints of the original convolution neural network framework, which requires large training samples of identical size. The input images are shifted and cropped to generate sub-images of equal size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are randomly selected such that each subset contains the same number of elements and no two subsets are identical; these subsets serve as input layers for the convolution neural network. Through the convolution layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test set and training set are obtained. In an experiment classifying red blood cells, white blood cells and calcium oxalate crystals, the classification accuracy reached 97% or more.
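The shift-and-crop step that produces equal-size sub-images from variable-size inputs can be sketched as a sliding window. Window size and stride here are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def crop_subgraphs(img, size=28, stride=14):
    """Slide a fixed window over a variable-size image to generate
    equally sized sub-images (the 'sub-graphs' of the abstract)."""
    crops = []
    for i in range(0, img.shape[0] - size + 1, stride):
        for j in range(0, img.shape[1] - size + 1, stride):
            crops.append(img[i:i + size, j:j + size])
    return np.stack(crops)

img = np.random.default_rng(1).random((70, 70))
subs = crop_subgraphs(img)
# 4 window positions per axis -> 16 crops, all 28x28
assert subs.shape == (16, 28, 28)
```
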
NASA Astrophysics Data System (ADS)
Wuthrich, Christian
My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. 
Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.
The Semantics of Plurals: A Defense of Singularism
ERIC Educational Resources Information Center
Florio, Salvatore
2010-01-01
In this dissertation, I defend "semantic singularism", which is the view that syntactically plural terms, such as "they" or "Russell and Whitehead", are semantically singular. A semantically singular term is a term that denotes a single entity. Semantic singularism is to be distinguished from "syntactic singularism", according to which…
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
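The central observation above is that the E-step expectations reduce to time convolutions of transition probabilities. On a uniform time grid such a convolution is a discrete convolution scaled by the step size; a toy sketch with exponential kernels (illustrative, not the BDP transition probabilities of the paper):

```python
import numpy as np

def convolve_transition_probs(pa, pb, dt):
    """Approximate the time convolution
    (pa * pb)(t) = integral of pa(s) pb(t - s) ds on a uniform grid."""
    return np.convolve(pa, pb) * dt

dt = 0.01
t = np.arange(0, 5, dt)
pa = np.exp(-t)            # toy kernels: two rate-1 exponentials
pb = np.exp(-t)
conv = convolve_transition_probs(pa, pb, dt)[: len(t)]
# analytic convolution of exp(-t) with itself is t * exp(-t)
assert np.allclose(conv, t * np.exp(-t), atol=0.02)
```
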
Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Ragab, Marwa A A
2013-05-01
This manuscript discusses the application and comparison of three statistical regression methods for handling data: parametric, nonparametric, and weighted regression (WR). The data were obtained from different chemometric methods applied to high-performance liquid chromatography response data using the internal standard method. This was performed on a model drug, Acyclovir, which was analyzed in human plasma with ganciclovir as internal standard. An in vivo study was also performed. Derivative treatment of the chromatographic response ratio data was followed by convolution of the resulting derivative curves using 8-point sin xi polynomials (discrete Fourier functions). This work studies and compares the application of the WR method and Theil's method, a nonparametric regression (NPR) method, with the least-squares parametric regression (LSPR) method, which is considered the de facto standard method used for regression. When the assumption of homoscedasticity is not met for analytical data, a simple and effective way to counteract the great influence of the high concentrations on the fitted regression line is to use the WR method. WR was found to be superior to LSPR, as the former assumes that the y-direction error in the calibration curve increases as x increases. Theil's NPR method was also found to be superior to LSPR, as the former assumes that errors could occur in both the x- and y-directions and that they might not be normally distributed. Most of the results showed a significant improvement in precision and accuracy on applying the WR and NPR methods relative to LSPR.
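The three regression approaches compared above can be sketched on a toy heteroscedastic calibration set (hypothetical data, not the Acyclovir measurements; the 1/x weighting is one common choice for variance growing with concentration):

```python
import numpy as np
from scipy.stats import theilslopes

# hypothetical calibration data whose noise grows with concentration x
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 20)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05 * x)

# least-squares parametric regression (LSPR)
ls_slope, ls_intercept = np.polyfit(x, y, 1)

# weighted regression (WR): down-weight the noisier high concentrations
w_slope, w_intercept = np.polyfit(x, y, 1, w=1.0 / x)

# Theil's nonparametric regression (NPR): median of pairwise slopes
t_slope, t_intercept, t_lo, t_hi = theilslopes(y, x)

for s in (ls_slope, w_slope, t_slope):
    assert abs(s - 2.0) < 0.2   # all three recover the true slope
```
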
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
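The graph computation the abstract describes, taking the weight of the longest path in a weighted acyclic graph as the minimal memory, can be sketched as a standard longest-path-in-a-DAG routine. The example edge weights are hypothetical, not drawn from any actual encoder.

```python
from collections import defaultdict

def longest_path_dag(edges, n):
    """Weight of the longest path in a DAG with nodes 0..n-1 given as
    (u, v, w) edges; node ids are assumed topologically ordered."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
    best = [0] * n
    for u in range(n):                 # process in topological order
        for v, w in adj[u]:
            best[v] = max(best[v], best[u] + w)
    return max(best)

# hypothetical noncommutativity graph of a pearl-necklace encoder
edges = [(0, 1, 2), (0, 2, 1), (1, 3, 3), (2, 3, 1)]
memory = longest_path_dag(edges, 4)
assert memory == 5                     # longest path 0 -> 1 -> 3
```
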
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Singularities in Optimal Structural Design
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Guptill, J. D.; Berke, L.
1992-01-01
Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.
Naked singularity resolution in cylindrical collapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurita, Yasunari; Nakao, Ken-ichi
In this paper, we study the gravitational collapse of null dust in cylindrically symmetric spacetime. The naked singularity necessarily forms at the symmetry axis. We consider the situation in which null dust is emitted again from the naked singularity formed by the collapsed null dust and investigate the backreaction of this emission on the naked singularity. We show a very peculiar but physically important case in which the same amount of null dust as the collapsed one is emitted from the naked singularity as soon as the ingoing null dust hits the symmetry axis and forms the naked singularity. In this case, although this naked singularity satisfies the strong curvature condition of Krolak (the limiting focusing condition), geodesics which hit the singularity can be extended uniquely across it. Therefore, we may say that the collapsing null dust passes through the singularity formed by itself and then leaves for infinity. Finally, the singularity completely disappears and flat spacetime remains.
Deformations, moduli stabilisation and gauge couplings at one-loop
NASA Astrophysics Data System (ADS)
Honecker, Gabriele; Koltermann, Isabel; Staessens, Wieland
2017-04-01
We investigate deformations of Z_2 orbifold singularities on the toroidal orbifold {T}^6/(Z_2× Z_6) with discrete torsion in the framework of Type IIA orientifold model building with intersecting D6-branes wrapping special Lagrangian cycles. To this aim, we employ the hypersurface formalism developed previously for the orbifold {T}^6/(Z_2× Z_6) with discrete torsion and adapt it to the (Z_2× Z_6× Ω R) point group by modding out the remaining Z_3 subsymmetry and the orientifold projection Ω R. We first study the local behaviour of the Z_3× Ω R invariant deformation orbits under non-zero deformation and then develop methods to assess the deformation effects on the fractional three-cycle volumes globally. We confirm that D6-branes supporting USp(2 N) or SO(2 N) gauge groups do not constrain any deformation, while deformation parameters associated to cycles wrapped by D6-branes with U( N) gauge groups are constrained by D-term supersymmetry breaking. These features are exposed in global prototype MSSM, Left-Right symmetric and Pati-Salam models first constructed in [1, 2], for which we here count the number of stabilised moduli and study flat directions changing the values of some gauge couplings.
Structure Calculation and Reconstruction of Discrete-State Dynamics from Residual Dipolar Couplings.
Cole, Casey A; Mukhopadhyay, Rishi; Omar, Hanin; Hennig, Mirko; Valafar, Homayoun
2016-04-12
Residual dipolar couplings (RDCs) acquired by nuclear magnetic resonance (NMR) spectroscopy are an indispensable source of information in investigation of molecular structures and dynamics. Here, we present a comprehensive strategy for structure calculation and reconstruction of discrete-state dynamics from RDC data that is based on the singular value decomposition (SVD) method of order tensor estimation. In addition to structure determination, we provide a mechanism of producing an ensemble of conformations for the dynamical regions of a protein from RDC data. The developed methodology has been tested on simulated RDC data with ±1 Hz of error from an 83 residue α protein (PDB ID 1A1Z ) and a 213 residue α/β protein DGCR8 (PDB ID 2YT4 ). In nearly all instances, our method reproduced the structure of the protein including the conformational ensemble to within less than 2 Å. On the basis of our investigations, arc motions with more than 30° of rotation are identified as internal dynamics and are reconstructed with sufficient accuracy. Furthermore, states with relative occupancies above 20% are consistently recognized and reconstructed successfully. Arc motions with a magnitude of 15° or relative occupancy of less than 10% are consistently unrecognizable as dynamical regions within the context of ±1 Hz of error.
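The SVD-based order tensor estimation the abstract builds on can be sketched in a few lines. This is a minimal toy, not the authors' software: the bond vectors, the "true" tensor, and the ±1 Hz noise level are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unit internuclear vectors (e.g. N-H bonds) in the molecular frame.
v = rng.normal(size=(40, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
x, y, z = v.T

# Design matrix for the 5 independent elements of the traceless, symmetric
# order (Saupe) tensor, using Sxx = -Syy - Szz.
A = np.column_stack([y**2 - x**2, z**2 - x**2, 2*x*y, 2*x*z, 2*y*z])

# Simulated couplings (Hz) from an assumed "true" tensor, with +-1 Hz noise
# comparable to the abstract's simulations.
s_true = np.array([4.0, -9.0, 2.0, -1.5, 3.0])
rdc = A @ s_true + rng.normal(scale=1.0, size=len(v))

# SVD least-squares estimate of the order tensor elements.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
s_est = Vt.T @ ((U.T @ rdc) / w)
```

With enough well-distributed vectors the five tensor elements are recovered to well within the noise level.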
Phillips, Carolyn L.; Peterka, Tom; Karpeyev, Dmitry; ...
2015-02-20
In type II superconductors, the dynamics of superconducting vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter. Extracting their precise positions and motion from discretized numerical simulation data is an important, but challenging, task. In the past, vortices have mostly been detected by analyzing the magnitude of the complex scalar field representing the order parameter and visualized by corresponding contour plots and isosurfaces. However, these methods, primarily used for small-scale simulations, blur the fine details of the vortices, scale poorly to large-scale simulations, and do not easily enable isolating and tracking individual vortices. In this paper, we present a method for exactly finding the vortex core lines from a complex order parameter field. With this method, vortices can be easily described at a resolution even finer than the mesh itself. The precise determination of the vortex cores allows the interplay of the vortices inside a model superconductor to be visualized in higher resolution than has previously been possible. Finally, by representing the field as the set of vortices, this method also massively reduces the data footprint of the simulations and provides the data structures for further analysis and feature tracking.
Viscid-inviscid interaction associated with incompressible flow past wedges at high Reynolds number
NASA Technical Reports Server (NTRS)
Warpinski, N. R.; Chow, W. L.
1977-01-01
An analytical method is suggested for the study of the viscid inviscid interaction associated with incompressible flow past wedges with arbitrary angles. It is shown that the determination of the nearly constant pressure (base pressure) prevailing within the near wake is really the heart of the problem, and the pressure can only be established from these interactive considerations. The basic free streamline flow field is established through two discrete parameters which adequately describe the inviscid flow around the body and the wake. The viscous flow processes such as the boundary layer buildup, turbulent jet mixing, and recompression are individually analyzed and attached to the inviscid flow in the sense of the boundary layer concept. The interaction between the viscous and inviscid streams is properly displayed by the fact that the aforementioned discrete parameters needed for the inviscid flow are determined by the viscous flow condition at the point of reattachment. It is found that the reattachment point behaves as a saddle point singularity for the system of equations describing the recompressive viscous flow processes, and this behavior is exploited for the establishment of the overall flow field. Detailed results such as the base pressure, pressure distributions on the wedge, and the geometry of the wake are determined as functions of the wedge angle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, T.
The exact equivalence between a bad-cavity laser with modulated inversion and a nonlinear oscillator in a Toda potential driven by an external modulation is presented. The dynamical properties of the laser system are investigated in detail by analyzing a Toda oscillator system. The temporal characteristics of the bad-cavity laser under strong modulation are analyzed extensively by numerically investigating the simpler Toda system as a function of two control parameters: the dc component of the population inversion and the modulation amplitude. The system exhibits two kinds of optical chaos: one is the quasiperiodic chaos in the region of intermediate modulation amplitude and the other is the intermittent kicked chaos in the region of strong modulation and large dc component of the pumping. The former is well described by a one-dimensional discrete map with a singular invariant probability measure. There are two types of onset of the chaos: quasiperiodic instability (continuous path to chaos) and catastrophic crisis (discontinuous path). The period-doubling cascade of bifurcation is also observed. The simple discrete model of the Toda system is presented to obtain analytically the one-dimensional map function and to understand the effect of the asymmetric potential curvature on yielding chaos.
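The driven Toda oscillator underlying this analysis is easy to integrate numerically. The sketch below uses illustrative parameter values (not the laser-mapped ones from the paper) and records a stroboscopic (Poincaré) section once per drive period, which is the natural object behind the one-dimensional discrete map mentioned above.

```python
import numpy as np

# Driven, damped Toda oscillator: x'' + gamma*x' + (exp(x) - 1) = A*cos(w*t).
# gamma, A and w are illustrative assumptions, not the paper's values.
gamma, A, w = 0.5, 0.5, 1.0

def accel(x, v, time):
    return A * np.cos(w * time) - gamma * v - (np.exp(x) - 1.0)

dt = 0.01
x, v = 0.0, 0.0
strobe = []                               # Poincare section, one sample per drive period
steps_per_period = int(round(2 * np.pi / w / dt))
for i in range(200 * steps_per_period):
    # classical RK4 step for the first-order system (x, v)
    k1x, k1v = v, accel(x, v, i * dt)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v, (i + 0.5)*dt)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v, (i + 0.5)*dt)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v, (i + 1)*dt)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    if (i + 1) % steps_per_period == 0:
        strobe.append(x)
strobe = np.array(strobe)
```

Sweeping the dc offset and the modulation amplitude and plotting successive strobe values against each other is one way to expose the map structure described in the abstract.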
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder. ...binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
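The basis-reduction idea can be sketched with numpy. Everything here is a toy stand-in, not the authors' implementation: the rate range, input function and tissue curve are assumed, but the pipeline follows the abstract: SVD of a bank of exponential modes, convolution of the reduced basis with the input function, then a pseudoinverse fit.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 241)           # minutes (assumed sampling)
dt = t[1] - t[0]

# Candidate exponential modes covering an assumed physiological rate range.
rates = np.logspace(-3, 0, 200)           # 1/min
modes = np.exp(-np.outer(t, rates))

# SVD: a handful of orthogonal functions represents every mode in the range.
U, s, _ = np.linalg.svd(modes, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))
basis = U[:, :k]

# Reduced basis convolved with a measured input function (toy bolus here).
Cp = t * np.exp(-t / 4.0)
conv_basis = np.stack(
    [np.convolve(Cp, basis[:, j])[: len(t)] * dt for j in range(k)], axis=1
)

# Fit a simulated tissue curve: coefficients via the Moore-Penrose pseudoinverse.
tissue = 0.3 * np.convolve(Cp, np.exp(-0.1 * t))[: len(t)] * dt
coef = np.linalg.pinv(conv_basis) @ tissue
err = np.linalg.norm(conv_basis @ coef - tissue) / np.linalg.norm(tissue)
```

The exponential family is numerically low rank, so k comes out far smaller than the 200 candidate rates while still fitting the tissue curve to high accuracy.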
Selection of the best features for leukocytes classification in blood smear microscopic images
NASA Astrophysics Data System (ADS)
Sarrafzadeh, Omid; Rabbani, Hossein; Talebi, Ardeshir; Banaem, Hossein Usefi
2014-03-01
Automatic differential counting of leukocytes provides invaluable information to pathologists for the diagnosis and treatment of many diseases. The main objective of this paper is to detect leukocytes in a blood smear microscopic image and classify them into their types: Neutrophil, Eosinophil, Basophil, Lymphocyte and Monocyte, using the features that pathologists consider to differentiate leukocytes. The features comprise color, geometric and texture features. The colors of nucleus and cytoplasm vary among the leukocytes. Lymphocytes have a single, large, round or oval nucleus, and Monocytes have a single, convoluted nucleus. The nucleus of Eosinophils is divided into 2 segments and that of Neutrophils into 2 to 5 segments. Lymphocytes often have no granules, Monocytes have tiny granules, Neutrophils have fine granules and Eosinophils have large granules in the cytoplasm. Six color features are extracted from both nucleus and cytoplasm, 6 geometric features only from the nucleus, and 6 statistical features and 7 moment-invariant features only from the cytoplasm of leukocytes. These features are fed to support vector machine (SVM) classifiers with a one-versus-one architecture. The results obtained by applying the proposed method to blood smear microscopic images of 10 patients, including 149 white blood cells (WBCs), indicate that the correct rate for all classifiers is above 93%, which is higher than in previous literature.
Model-free quantification of dynamic PET data using nonparametric deconvolution
Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R
2015-01-01
Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
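The deconvolution step at the heart of this approach amounts to inverting a discrete convolution matrix with the SVD. The sketch below is a noise-free toy with an assumed input function and a monoexponential IRF; with noisy clinical data one would truncate the small singular values, which is exactly where the threshold below would bite.

```python
import numpy as np

t = np.arange(0.0, 60.0, 0.5)             # minutes (assumed sampling)
dt = t[1] - t[0]
n = len(t)

# Toy metabolite-corrected input function (not a real tracer's), chosen nonzero
# at t = 0 so the discrete convolution matrix is well conditioned.
Cp = np.exp(-t / 4.0)

# Lower-triangular Toeplitz matrix so that data = A @ irf is the discrete
# convolution of the input function with the impulse response function (IRF).
idx = np.arange(n)[:, None] - np.arange(n)[None, :]
A = np.where(idx >= 0, Cp[np.clip(idx, 0, n - 1)], 0.0) * dt

irf_true = np.exp(-0.2 * t)               # monoexponential IRF for the toy
data = A @ irf_true

# Model-free IRF estimate via the truncated-SVD pseudoinverse. This toy is
# noise-free, so the threshold keeps essentially all singular values.
U, s, Vt = np.linalg.svd(A)
keep = s > 1e-8 * s[0]
irf_est = (Vt[keep].T / s[keep]) @ (U[:, keep].T @ data)
```

Functionals such as the volume of distribution then follow from integrals of the estimated IRF, with no compartment model assumed anywhere.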
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes at the same layer. The experimental results show that the proposed method outperforms existing methods on benchmarked images in both reconstruction quality indices and human visual effect.
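The residual-learning structure can be illustrated with a bare numpy sketch. The kernels here are random, untrained stand-ins and the network is far shallower than the paper's; the point is only the shape of the computation: stacked convolutions of mixed kernel sizes whose output is added back to the input.

```python
import numpy as np

rng = np.random.default_rng(4)

def conv2d(img, k):
    # "same"-size 2-D convolution via explicit loops (clarity over speed).
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

lr = rng.random((16, 16))                 # toy "low-resolution" patch
k5 = rng.normal(size=(5, 5))              # untrained stand-in kernels
k3 = rng.normal(size=(3, 3))
k1 = np.array([[0.1]])

# Residual learning: the convolution stages predict a correction that is
# added back to the input, mixing 5x5, 3x3 and 1x1 kernels as described.
feat = np.maximum(conv2d(lr, k5), 0.0)    # conv + ReLU
feat = np.maximum(conv2d(feat, k3), 0.0)
sr = lr + conv2d(feat, k1)                # identity skip connection
```

Learning only the residual correction, rather than the full HR image, is what makes deep SISR networks easier to train.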
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
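For readers unfamiliar with the Viterbi baseline the abstract compares against, here is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, K=3 convolutional code with octal generators (7, 5). This is a generic illustration of trellis decoding, not the Wyner-Ash code analyzed above.

```python
# Generators (7, 5) octal for the standard rate-1/2, constraint-length-3 code.
G = (0b111, 0b101)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state            # current bit plus two memory bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(rx, n_states=4):
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                exp_out = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + (exp_out[0] != rx[i]) + (exp_out[1] != rx[i + 1])
                if m < new_metric[ns]:        # keep the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = encode(msg)
tx[3] ^= 1                                # inject a single channel bit error
decoded = viterbi(tx)
```

Even this toy shows why state count dominates the cost: every trellis section visits all states, which is what the syndrome decoder's 7 states versus Viterbi's 64 is measuring.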
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
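A single graph-convolution layer can be written in a few lines of numpy. The sketch below uses the symmetrically normalized Kipf-Welling form as one simple instance of the idea; the paper's "weave" modules additionally carry per-pair (bond and distance) features alongside the atom features, and the molecule and weights here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-atom chain (e.g. the heavy atoms of propane); adjacency from bonds.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
H = rng.normal(size=(3, 4))               # initial per-atom feature vectors
W = rng.normal(size=(4, 8))               # learnable weight matrix (untrained)

# One graph-convolution layer: normalize, aggregate neighbours, transform, ReLU.
A_hat = A + np.eye(3)                     # add self-connections
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} (A + I) D^{-1/2}
H_next = np.maximum(A_norm @ H @ W, 0.0)
```

Because each atom's new features depend only on its bonded neighbours, the layer is invariant to atom ordering, which is the property fingerprints approximate by hashing substructures.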
Cycle of phase, coherence and polarization singularities in Young's three-pinhole experiment.
Pang, Xiaoyan; Gbur, Greg; Visser, Taco D
2015-12-28
It is now well-established that a variety of singularities can be characterized and observed in optical wavefields. It is also known that these phase singularities, polarization singularities and coherence singularities are physically related, but the exact nature of their relationship is still somewhat unclear. We show how a Young-type three-pinhole interference experiment can be used to create a continuous cycle of transformations between classes of singularities, often accompanied by topological reactions in which different singularities are created and annihilated. This arrangement serves to clarify the relationships between the different singularity types, and provides a simple tool for further exploration.
Numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity
NASA Astrophysics Data System (ADS)
Korepanov, V. V.; Matveenko, V. P.; Fedorov, A. Yu.; Shardakov, I. N.
2013-07-01
An algorithm for the numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity is considered. The algorithm is based on separation of a power-law dependence from the finite-element solution in a neighborhood of singular points in the domain under study, where singular solutions are possible. The obtained power-law dependencies allow one to conclude whether the stresses have singularities and what the character of these singularities is. The algorithm was tested for problems of classical elasticity by comparing the stress singularity exponents obtained by the proposed method and from known analytic solutions. Problems with various cases of singular points, namely, body surface points at which either the smoothness of the surface is violated, or the type of boundary conditions is changed, or distinct materials are in contact, are considered as applications. The stress singularity exponents obtained by using the models of classical and asymmetric elasticity are compared. It is shown that, in the case of cracks, the stress singularity exponents are the same for the elasticity models under study, but for other cases of singular points, the stress singularity exponents obtained on the basis of asymmetric elasticity have insignificant quantitative distinctions from the solutions of the classical elasticity.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
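The circular-versus-linear distinction at the heart of this paper is easy to demonstrate in one dimension. The sketch below uses toy random signals in place of molecular surface grids: an unpadded DFT correlation wraps around, while padding both signals to at least len(f) + len(g) - 1 samples makes the cyclic result contain exactly the linear correlation, which is the kind of bound the paper derives.

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=50)   # toy 1-D stand-in for a gridded "receptor" surface
g = rng.normal(size=20)   # toy 1-D stand-in for a gridded "ligand" surface

# Reference: direct linear cross-correlation over all relative shifts.
ref = np.correlate(f, g, mode="full")

# Naive DFT correlation at length len(f): this is *cyclic* correlation,
# whose wrapped entries differ from the linear result.
circ = np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(g, n=len(f)))).real

# Zero-padding both grids to len(f) + len(g) - 1 samples removes the
# wrap-around, so the cyclic result equals the linear correlation.
n = len(f) + len(g) - 1
lin = np.fft.ifft(np.fft.fft(f, n) * np.conj(np.fft.fft(g, n))).real
```

In the docking setting the same padding rule applies per axis of the 3-D grid, which is why the grid must be large enough to hold both molecules plus their relative translations.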
Uncertainty in simulated groundwater-quality trends in transient flow
Starn, J. Jeffrey; Bagtzoglou, Amvrossios; Robbins, Gary A.
2013-01-01
In numerical modeling of groundwater flow, the result of a given solution method is affected by the way in which transient flow conditions and geologic heterogeneity are simulated. An algorithm is demonstrated that simulates breakthrough curves at a pumping well by convolution-based particle tracking in a transient flow field for several synthetic basin-scale aquifers. In comparison to grid-based (Eulerian) methods, the particle (Lagrangian) method is better able to capture multimodal breakthrough caused by changes in pumping at the well, although the particle method may be apparently nonlinear because of the discrete nature of particle arrival times. Trial-and-error choice of number of particles and release times can perhaps overcome the apparent nonlinearity. Heterogeneous aquifer properties tend to smooth the effects of transient pumping, making it difficult to separate their effects in parameter estimation. Porosity, a new parameter added for advective transport, can be accurately estimated using both grid-based and particle-based methods, but predictions can be highly uncertain, even in the simple, nonreactive case.
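The convolution-based particle-tracking idea can be sketched directly. The travel-time sample below is a lognormal stand-in for what a particle tracker would deliver, and the units and source history are assumed; the breakthrough curve at the well is then the convolution of the source concentration history with the unit-area travel-time distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.arange(0.0, 100.0, 1.0)            # days (assumed units)
dt = t[1] - t[0]

# Travel times of tracked particles from source area to the pumping well; a
# lognormal sample stands in for a particle tracker's output.
travel = rng.lognormal(mean=3.0, sigma=0.4, size=5000)
hist, _ = np.histogram(travel, bins=len(t), range=(0.0, 100.0))
transfer = hist / (hist.sum() * dt)       # unit-area travel-time distribution

# Concentration history at the source: a step input switched off at t = 40,
# e.g. a land-use change upgradient of the well.
c_in = np.where(t < 40.0, 1.0, 0.0)

# Breakthrough at the well = convolution of input history with travel times.
c_well = np.convolve(c_in, transfer)[: len(t)] * dt
```

The discreteness the abstract warns about is visible here: with few particles the histogram is ragged, and small parameter changes move individual arrival times between bins, which is what makes the response apparently nonlinear.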
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming
1990-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
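The computational core of the CGFFT idea, evaluating a convolution-type operator by FFT inside a conjugate gradient loop, can be shown on a 1-D toy. The Toeplitz kernel below is an illustrative stand-in for a boundary-integral block, embedded in a circulant matrix of twice the size so that each matrix-vector product costs one FFT pair instead of an O(N^2) dense multiply.

```python
import numpy as np

n = 64
# First column of a symmetric positive-definite Toeplitz matrix (illustrative).
first_col = 0.5 ** np.arange(n, dtype=float)
first_col[0] = 2.0

# Circulant embedding: a 2n-point circulant whose top-left n x n block is
# the Toeplitz matrix, so the product is a cyclic convolution via FFT.
c = np.concatenate([first_col, [0.0], first_col[:0:-1]])
c_hat = np.fft.fft(c)

def toeplitz_matvec(x):
    return np.fft.ifft(c_hat * np.fft.fft(x, 2 * n)).real[:n]

# Plain conjugate gradient driven entirely by the FFT-based operator.
b = np.ones(n)
x = np.zeros(n)
r = b - toeplitz_matvec(x)
p = r.copy()
rs = r @ r
for _ in range(5 * n):
    Ap = toeplitz_matvec(p)
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-12:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

residual = np.linalg.norm(b - toeplitz_matvec(x))
```

The memory advantage claimed in the abstract comes from the same structure: only the kernel's first column (here c_hat) is stored, never the full matrix.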
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.
1989-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for the multichannel framework; it determines the convolution masks perfectly in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional, with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
NASA Astrophysics Data System (ADS)
Santoli, Salvatore
1994-01-01
The mechanistic interpretation of the communication process between cognitive hierarchical systems as an iterated pair of convolutions between the incoming discrete time series signals and the chaotic dynamics (CD) at the nm-scale of the perception (energy) wetware level, with the consequent feeding of the resulting collective properties to the CD software (symbolic) level, shows that the category of quality, largely present in Galilean quantitative-minded science, is to be increasingly made into quantity for finding optimum common codes for communication between different intelligent beings. The problem is similar to that solved by biological evolution, of communication between the conscious logic brain and the underlying unfelt ultimate extra-logical processes, as well as to the problem of the mind-body or the structure-function dichotomies. Perspective cybernated nanotechnological and/or nanobiological interfaces, and time evolution of the 'contact language' (the iterated dialogic process) as a self-organising system might improve human-alien understanding.
Mass quantization of the Schwarzschild black hole
NASA Astrophysics Data System (ADS)
Vaz, Cenalo; Witten, Louis
1999-07-01
We examine the Wheeler-DeWitt equation for a static, eternal Schwarzschild black hole in Kuchař-Brown variables and obtain its energy eigenstates. Consistent solutions vanish in the exterior of the Kruskal manifold and are nonvanishing only in the interior. The system is reminiscent of a particle in a box. States of definite parity avoid the singular geometry by vanishing at the origin. These definite parity states admit a discrete energy spectrum, depending on one quantum number which determines the Arnowitt-Deser-Misner mass of the black hole according to a relation conjectured long ago by Bekenstein, M ~ √n M_P. If attention is restricted only to these quantized energy states, a black hole is described not only by its mass but also by its parity. States of indefinite parity do not admit a quantized mass spectrum.
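Bekenstein's conjectured spectrum follows from a one-line heuristic, assuming the horizon area is quantized in integer multiples of the Planck area with some order-one constant α:

```latex
A_n = \alpha\, n\, \ell_P^2 , \qquad
A = \frac{16\pi G^2 M^2}{c^4}
\;\Longrightarrow\;
M_n = \sqrt{\frac{\alpha n}{16\pi}}\; M_P \;\propto\; \sqrt{n}\, M_P ,
```

where \( \ell_P^2 = \hbar G / c^3 \) and \( M_P^2 = \hbar c / G \), and the second relation is the Schwarzschild horizon area \( A = 4\pi r_s^2 \) with \( r_s = 2GM/c^2 \).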
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaptation process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaptation problems are presented and discussed. 15 refs., 20 figs.
On important precursor of singular optics (tutorial)
NASA Astrophysics Data System (ADS)
Polyanskii, Peter V.; Felde, Christina V.; Bogatyryova, Halina V.; Konovchuk, Alexey V.
2018-01-01
The rise of singular optics is usually associated with the seminal paper by J. F. Nye and M. V. Berry [Proc. R. Soc. Lond. A, 336, 165-189 (1974)]. Intense development of this area of modern photonics started in the early eighties of the XX century with the invention of the interference technique for detection and diagnostics of phase singularities, such as optical vortices in complex speckle-structured light fields. The next powerful incentive for the formation of singular optics into a separate area of the science of light was connected with the discovery of a very practical technique for creating singular optical beams of various kinds on the basis of computer-generated holograms. In the eighties and nineties of the XX century, singular optics evolved almost entirely under the approximation of complete coherence of the light field. Only at the threshold of the XXI century was it comprehended that singular-optics approaches can be fruitfully extended to partially spatially coherent, partially polarized and polychromatic light fields supporting singularities of new kinds, which resulted in the establishment of correlation singular optics. Here we show that correlation singular optics has much deeper roots, ascending to the "pre-singular" and even pre-laser epoch and associated with the concepts of partial coherence and polarization. It is remarkable that correlation singular optics in its present interpretation forestalled standard coherent singular optics. This paper is timed to the sixtieth anniversary of the most profound precursor of modern correlation singular optics [J. Opt. Soc. Am., 47, 895-902 (1957)].
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable to combine information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Two-dimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce off-chip memory bandwidth, a data-reuse cache is constructed: multi-block SPRAM caches image blocks, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, yielding a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the structure achieves real-time convolution with kernels up to 40×32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, and maximizes output data throughput while reducing the need for off-chip memory bandwidth.
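The cost argument in the record above is easy to see in software: direct 2-D convolution performs one multiply-accumulate per kernel tap per output pixel, which is what motivates dedicated hardware for kernels as large as 40×32. A minimal NumPy reference sketch (the name `conv2d_direct` is ours, not from the paper; odd kernel dimensions are assumed for symmetric padding):

```python
import numpy as np

def conv2d_direct(image, kernel):
    """Direct (non-FFT) 2-D convolution with zero padding.

    Reference implementation only: cost is O(H*W*kh*kw), the figure a
    dedicated ASIC architecture is designed to hide for large kernels.
    Assumes odd kernel dimensions so padding is symmetric.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```

The quadruple implicit loop (two explicit, two inside `np.sum`) makes the bandwidth problem concrete: every output pixel re-reads a kh×kw window, which is exactly the redundancy an on-chip data-reuse cache exploits.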
2007-06-01
[Table 2 (from [12]) lists the best (maximum free distance) rate r = 2/3 punctured convolutional codes and their information weight structure, tabulating constraint length K, free distance d_free, and information weight B_free, obtained from the Hamming distance between all pairs of non-zero paths.]
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
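The five steps above implement 2-D cyclic convolution via polynomial transforms and the Chinese remainder theorem. As a cross-check of the quantity being computed (not a sketch of the FPT algorithm itself), the same 2-D cyclic convolution can be obtained with ordinary FFTs:

```python
import numpy as np

def cyclic_conv2d(a, b):
    """2-D cyclic (circular) convolution of two equal-shape arrays.

    The FPT program described above computes this same quantity with
    polynomial transforms and the CRT; the FFT route here is just a
    convenient reference for verifying results.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))
```

By the convolution theorem, pointwise multiplication of the 2-D DFTs corresponds to cyclic (wrap-around) convolution in the spatial domain, which is why row and column dimensions of the two input matrices must agree.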
NASA Technical Reports Server (NTRS)
Asbury, Scott C.; Hunter, Craig A.
1999-01-01
An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate, separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio by as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
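The central trick described above, convolving with a filter whose taps are spread apart by a dilation rate, can be sketched in a few lines. The function name `atrous_conv1d` and the 1-D setting are illustrative choices, not the DeepLab implementation; like deep-learning frameworks, it computes cross-correlation (no kernel flip):

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution, valid region only.

    Equivalent to convolving with w upsampled by inserting rate-1
    zeros between taps: the receptive field grows to (k-1)*rate + 1
    samples without adding parameters or computation.
    """
    k = len(w)
    span = (k - 1) * rate + 1          # effective field of view
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out
```

With `rate=1` this reduces to ordinary convolution; with `rate=2` a 3-tap filter sees a 5-sample window, which is the resolution/field-of-view control the abstract highlights.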
Evolution of singularities in a partially coherent vortex beam.
van Dijk, Thomas; Visser, Taco D
2009-04-01
We study the evolution of phase singularities and coherence singularities in a Laguerre-Gauss beam that is rendered partially coherent by letting it pass through a spatial light modulator. The original beam has an on-axis minimum of intensity--a phase singularity--that transforms into a maximum of the far-field intensity. In contrast, although the original beam has no coherence singularities, such singularities are found to develop as the beam propagates. This disappearance of one kind of singularity and the gradual appearance of another is illustrated with numerical examples.
Naked singularity, firewall, and Hawking radiation.
Zhang, Hongsheng
2017-06-21
Spacetime singularity has always been of interest since the proof of the Penrose-Hawking singularity theorem. Naked singularity naturally emerges from reasonable initial conditions in the collapsing process. A recent interesting approach in the black hole information problem implies that we need a firewall to break the surplus entanglements among the Hawking photons. Classically, the firewall becomes a naked singularity. We find some vacuum analytical solutions in R^n-gravity of the firewall type and use these solutions as concrete models to study the naked singularities. By using standard quantum theory, we investigate the Hawking radiation emitted from the black holes with naked singularities. Here we show that the singularity itself does not destroy information. A unitary quantum theory works well around a firewall-type singularity. We discuss the validity of our result in general relativity. Further, our result demonstrates that the temperature of the Hawking radiation still can be expressed in the form of the surface gravity divided by 2π. This indicates that a naked singularity may not compromise the Hawking evaporation process.
On the Weyl curvature hypothesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com
2013-11-15
The Weyl curvature hypothesis of Penrose attempts to explain the high homogeneity and isotropy, and the very low entropy of the early universe, by conjecturing the vanishing of the Weyl tensor at the Big-Bang singularity. In previous papers, an equivalent form of Einstein’s equation has been proposed, which extends it and remains valid at an important class of singularities (including in particular the Schwarzschild, FLRW, and isotropic singularities). Here it is shown that if the Big-Bang singularity is from this class, it also satisfies the Weyl curvature hypothesis. As an application, we study a very general example of cosmological models, which generalizes the FLRW model by dropping the isotropy and homogeneity constraints. This model also generalizes isotropic singularities, and a class of singularities occurring in Bianchi cosmologies. We show that the Big-Bang singularity of this model is of the type under consideration, and therefore satisfies the Weyl curvature hypothesis. -- Highlights: •The singularities we introduce are described by finite geometric/physical objects. •Our singularities have smooth Riemann and Weyl curvatures. •We show they satisfy Penrose’s Weyl curvature hypothesis (Weyl=0 at singularities). •Examples: FLRW, isotropic singularities, an extension of Schwarzschild’s metric. •Example: a large class of singularities which may be anisotropic and inhomogeneous.
NASA Astrophysics Data System (ADS)
Le Bars, Michael; Worster, M. Grae
2006-07-01
A finite-element simulation of binary alloy solidification based on a single-domain formulation is presented and tested. Resolution of phase change is first checked by comparison with the analytical results of Worster [M.G. Worster, Solidification of an alloy from a cooled boundary, J. Fluid Mech. 167 (1986) 481-501] for purely diffusive solidification. Fluid dynamical processes without phase change are then tested by comparison with previous numerical studies of thermal convection in a pure fluid [G. de Vahl Davis, Natural convection of air in a square cavity: a bench mark numerical solution, Int. J. Numer. Meth. Fluids 3 (1983) 249-264; D.A. Mayne, A.S. Usmani, M. Crapper, h-adaptive finite element solution of high Rayleigh number thermally driven cavity problem, Int. J. Numer. Meth. Heat Fluid Flow 10 (2000) 598-615; D.C. Wan, B.S.V. Patnaik, G.W. Wei, A new benchmark quality solution for the buoyancy driven cavity by discrete singular convolution, Numer. Heat Transf. 40 (2001) 199-228], in a porous medium with a constant porosity [G. Lauriat, V. Prasad, Non-darcian effects on natural convection in a vertical porous enclosure, Int. J. Heat Mass Transf. 32 (1989) 2135-2148; P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967] and in a mixed liquid-porous medium with a spatially variable porosity [P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967; N. Zabaras, D. Samanta, A stabilized volume-averaging finite element method for flow in porous media and binary alloy solidification processes, Int. J. Numer. Meth. Eng. 60 (2004) 1103-1138]. 
Finally, new benchmark solutions for simultaneous flow through both fluid and porous domains and for convective solidification processes are presented, based on the similarity solutions in corner-flow geometries recently obtained by Le Bars and Worster [M. Le Bars, M.G. Worster, Interfacial conditions between a pure fluid and a porous medium: implications for binary alloy solidification, J. Fluid Mech. (in press)]. Good agreement is found for all tests, hence validating our physical and numerical methods. More generally, the computations presented here could now be considered as standard and reliable analytical benchmarks for numerical simulations, specifically and independently testing the different processes underlying binary alloy solidification.
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We have combined an iterative learning approach and an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been demonstrated by comparison with other state-of-the-art medical image segmentation methods.
Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog
NASA Astrophysics Data System (ADS)
Rosshidi, H. T.; Hadi, A. R.
2009-06-01
This paper presents an implementation of the Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance the fingerprint image. The incoming signal, in the form of image pixels, is convolved with the Gabor filter to define the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristic of the proposed approach is the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
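The filter coefficients stored in memory are samples of a Gabor function, a Gaussian-windowed sinusoidal grating oriented along the local ridge direction. The generator below is a generic real-valued Gabor kernel; the parameter names and values are illustrative and do not reproduce the paper's fixed-point Verilog coefficients:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real-valued Gabor kernel of odd size: Gaussian envelope times
    a cosine carrier oriented at angle theta (radians).

    Convolving a fingerprint image with kernels at several theta
    values enhances ridges aligned with each orientation.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier
```

In a hardware flow these floating-point taps would be quantized to fixed point before being written to the coefficient memory the paper describes.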
Convolutional neural network for road extraction
NASA Astrophysics Data System (ADS)
Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong
2017-11-01
In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, the deep convolutional neural network VGG19 was employed for road extraction. Based on an analysis of how different input-block and output-block sizes affect the extraction result, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. The paper also gives advice on the choice of input block size and output block size.
Molecular graph convolutions: moving beyond fingerprints
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-01-01
Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
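The core idea of a graph convolution, updating each atom's features by aggregating over its bonded neighbours, can be sketched generically. The layer below is a minimal neighbour-sum layer with self-loops and a ReLU, in the spirit of the abstract but not the specific weave architecture of Kearnes et al.; all names are our own:

```python
import numpy as np

def graph_conv(A, X, W):
    """One generic graph-convolution layer.

    A : (n, n) adjacency matrix of the molecular graph (bonds)
    X : (n, d) per-atom feature matrix
    W : (d, k) learned weight matrix

    Returns relu((A + I) X W): each atom's new features mix its own
    features with those of its bonded neighbours, so the model reads
    structure directly from the graph instead of a fixed fingerprint.
    """
    A_hat = A + np.eye(A.shape[0])   # add self-loops
    H = A_hat @ X @ W
    return np.maximum(H, 0.0)        # ReLU nonlinearity
```

Stacking such layers lets information propagate along bond paths, which is how graph models exploit structural information that a fixed fingerprint would have to encode by hand.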
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Resolution of quantum singularities
NASA Astrophysics Data System (ADS)
Konkowski, Deborah; Helliwell, Thomas
2017-01-01
A review of quantum singularities in static and conformally static spacetimes is given. A spacetime is said to be quantum mechanically non-singular if a quantum wave packet does not feel, in some sense, the presence of a singularity; mathematically, this means that the wave operator is essentially self-adjoint on the space of square integrable functions. Spacetimes ranging from those with mild classical singularities (quasiregular ones) to those with strong classical curvature singularities have been tested. Here we discuss the similarities and differences between classical singularities that are healed quantum mechanically and those that are not. Possible extensions of the mathematical technique to more physically realistic spacetimes are discussed.
The geometry of singularities and the black hole information paradox
NASA Astrophysics Data System (ADS)
Stoica, O. C.
2015-07-01
The information loss occurs in an evaporating black hole only if the time evolution ends at the singularity. But as we shall see, the black hole solutions admit analytical extensions beyond the singularities, to globally hyperbolic solutions. The method used is similar to that for the apparent singularity at the event horizon, but at the singularity, the resulting metric is degenerate. When the metric is degenerate, the covariant derivative, the curvature, and the Einstein equation become singular. However, recent advances in the geometry of spacetimes with singular metric show that there are ways to extend analytically the Einstein equation and other field equations beyond such singularities. This means that the information can get out of the singularity. In the case of charged black holes, the obtained solutions have nonsingular electromagnetic field. As a bonus, if particles are such black holes, spacetime undergoes dimensional reduction effects like those required by some approaches to perturbative Quantum Gravity.
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.
Sochat, Vanessa V; Prybol, Cameron J; Kurtzer, Gregory M
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers.
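The content-hash idea behind the reproducibility metrics can be sketched in a few lines of Python: hash every file in an unpacked container tree and compare the resulting hash sets. The function names and the Jaccard similarity used here are illustrative assumptions, not the actual singularity-python implementation:

```python
import hashlib
import os

def content_hashes(root):
    """Map each file under root to the SHA-256 digest of its bytes.

    A toy version of representing a container by the content hashes
    of its files (operating system, custom software, metadata alike).
    """
    hashes = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            hashes[os.path.relpath(path, root)] = digest
    return hashes

def similarity(h1, h2):
    """Jaccard similarity of two containers' hash sets (illustrative)."""
    s1, s2 = set(h1.values()), set(h2.values())
    return len(s1 & s2) / len(s1 | s2) if (s1 | s2) else 1.0
```

Custom filters over which paths are hashed (e.g. excluding volatile metadata) would then yield the family of comparison metrics the abstract describes.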
Big bounce with finite-time singularity: The F(R) gravity description
NASA Astrophysics Data System (ADS)
Odintsov, S. D.; Oikonomou, V. K.
An alternative to the Big Bang cosmologies is provided by the Big Bounce cosmologies. In this paper, we study a bounce cosmology with a Type IV singularity occurring at the bouncing point in the context of F(R) modified gravity. We investigate the evolution of the Hubble radius and we examine the issue of primordial cosmological perturbations in detail. As we demonstrate, for the singular bounce, the primordial perturbations originating from the cosmological era near the bounce do not produce a scale-invariant spectrum, and the short-wavelength modes do not freeze after they exit the horizon but grow linearly with time. After presenting the study of cosmological perturbations, we discuss the viability of the singular bounce model; our results indicate that the singular bounce must be combined with another cosmological scenario, or modified appropriately, in order to lead to a viable cosmology. The study of the slow-roll parameters leads to the same result, indicating that the singular bounce theory is unstable at the singularity point for certain values of the parameters. We also conformally transform the Jordan-frame singular bounce and, as we demonstrate, the Einstein-frame metric leads to a Big Rip singularity. Therefore, the Type IV singularity in the Jordan frame becomes a Big Rip singularity in the Einstein frame. Finally, we briefly study a generalized singular cosmological model, which contains two Type IV singularities, with quite appealing features.
NASA Astrophysics Data System (ADS)
Ge, Yongbin; Cao, Fujun
2011-05-01
In this paper, a multigrid method based on the high order compact (HOC) difference scheme on nonuniform grids, which has been proposed by Kalita et al. [J.C. Kalita, A.K. Dass, D.C. Dalal, A transformation-free HOC scheme for steady convection-diffusion on non-uniform grids, Int. J. Numer. Methods Fluids 44 (2004) 33-53], is proposed to solve the two-dimensional (2D) convection diffusion equation. The HOC scheme does not involve any grid transformation to map the nonuniform grids to uniform grids; consequently, the multigrid method is brand-new for solving the discrete system arising from the difference equation on nonuniform grids. The corresponding multigrid projection and interpolation operators are constructed by the area ratio. Some boundary layer and local singularity problems are used to demonstrate the superiority of the present method. Numerical results show that the multigrid method with the HOC scheme on nonuniform grids achieves almost as efficient a convergence rate as on uniform grids, and the computed solution on nonuniform grids retains fourth order accuracy, whereas on uniform grids the solution is very poor for problems with very steep boundary layers or strong local singularities. The present method is also applied to solve the 2D incompressible Navier-Stokes equations using the stream function-vorticity formulation, and the numerical solutions of the lid-driven cavity flow problem are obtained and compared with solutions available in the literature.
Singularity in structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Guptill, J. D.; Berke, L.
1993-01-01
The conditions under which global and local singularities may arise in structural optimization are examined. Examples of these singularities are presented, and a framework is given within which the singularities can be recognized. It is shown, in particular, that singularities can be identified through the analysis of stress-displacement relations together with compatibility conditions or the displacement-stress relations derived by the integrated force method of structural analysis. Methods of eliminating the effects of singularities are suggested and illustrated numerically.
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasil'ev, Vasilii I; Soskin, M S
2013-02-28
The natural singular dynamics of elliptically polarised speckle fields, induced by the 'optical damage' effect in a photorefractive crystal of lithium niobate by a passing beam of a helium-neon laser, is studied by the developed methods of singular optics. For the polarisation singularities (C points), a new class of chain reactions, namely singular chain reactions, is discovered and studied. It is shown that they obey the conservation laws of topological charge and of the sum of Poincare indices. In addition, they exist for the entire time of crystal irradiation. They consist of a series of interlocking chains, where singularity pairs arising in a chain annihilate with singularities from neighbouring, independently created chains. Less often, singular 'loop' reactions are observed, where arising pairs of singularities annihilate after reversible transformations within the boundaries of a single speckle. The type of a singular reaction is determined by the topology and dynamics of the speckles in which the reactions develop. (laser optics 2012)
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, etc. Also, there has been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
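The pixel behaviour described above (weighted event accumulation with threshold-and-fire) can be sketched in a few lines. This is a minimal software model under assumed conventions (signed integer weights, reset by threshold subtraction); the actual design is a digital circuit, and all names here are ours.

```python
class AERConvolutionPixel:
    """Software sketch of a digital AER convolution pixel: each incoming
    address event adds a (signed) kernel weight to an accumulator, and the
    pixel emits an output event when the accumulator reaches a fixed
    threshold. Reset-by-subtraction is an assumed policy."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.acc = 0

    def receive_event(self, weight):
        """Accumulate one weighted event; return True if an output event fires."""
        self.acc += weight
        if self.acc >= self.threshold:
            self.acc -= self.threshold   # keep the remainder, as in integrate-and-fire
            return True
        return False
```

With a threshold of 4 and unit weights, the pixel fires on every fourth input event, carrying any remainder forward.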
2006-12-01
Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates. X precedes Y in the order... convolutional code with puncturing configuration (From [10])... Table 4. Mandatory channel coding per modulation (From [10])... a concatenation of a Reed-Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires... to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially... higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
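The puncturing mentioned in these excerpts works by deleting selected output bits of the rate-1/2 mother code according to a periodic pattern, raising the effective code rate. A minimal sketch, using the common rate-3/4 pattern (X = 110, Y = 101) as an assumed illustration rather than the exact pattern from the reports above:

```python
def puncture(x_bits, y_bits, px, py):
    """Apply a periodic puncturing pattern to the two output streams (X, Y)
    of a rate-1/2 convolutional encoder. px/py are lists of 1 (keep) and
    0 (drop). With px=[1,1,0], py=[1,0,1], every 3 input bits yield
    4 transmitted bits: an effective rate of 3/4."""
    out = []
    for i, (x, y) in enumerate(zip(x_bits, y_bits)):
        if px[i % len(px)]:
            out.append(x)   # keep this X bit
        if py[i % len(py)]:
            out.append(y)   # keep this Y bit
    return out
```

The decoder reinserts erasures at the punctured positions before Viterbi decoding, which is why very high puncturing (8/9, 9/10) degrades the distance properties of the mother code.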
Design and System Implications of a Family of Wideband HF Data Waveforms
2010-09-01
code rates (i.e. 8/9, 9/10) will be used to attain the highest data rates for surface wave links. Very high puncturing of convolutional codes can... Communication Links", Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y. "High-Rate Punctured Convolutional Codes"... length 7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing was
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMC's, such as UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMC's was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMC's were found which are good candidates for inner codes. Besides the further development of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a Technical note dated March 17, 1994. This technical note has also been included in this final report.
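For readers unfamiliar with convolutional encoding, the basic feed-forward encoder that modules like UMCEncd generalize can be sketched as follows. This is the textbook rate-1/2, constraint-length-3 code with generators (7, 5) in octal, chosen only as an illustration; it is not one of the unit-memory codes studied in the report.

```python
def conv_encode_r12(bits, g1=0b111, g2=0b101, k=3):
    """Encode a bit list with a feed-forward rate-1/2 convolutional code.
    g1/g2 are the generator polynomials over the last k input bits
    (here the textbook (7, 5) octal, constraint length 3 code)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # shift in the new bit
        out.append(bin(state & g1).count('1') % 2)    # parity of taps g1
        out.append(bin(state & g2).count('1') % 2)    # parity of taps g2
    return out
```

Encoding a single 1 followed by zeros gives the impulse response 11 10 11, whose Hamming weight (5) equals the free distance of this code.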
The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.
Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio
2017-01-01
To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN); light (SL); and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The outcome was the change in color intensity of hematomas, measured by means of polar CIE L*a*b* coordinates using a validated and standardized digital image system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Can accretion disk properties observationally distinguish black holes from naked singularities?
NASA Astrophysics Data System (ADS)
Kovács, Z.; Harko, T.
2010-12-01
Naked singularities are hypothetical astrophysical objects, characterized by a gravitational singularity without an event horizon. Penrose has proposed a conjecture, according to which there exists a cosmic censor who forbids the occurrence of naked singularities. Distinguishing between astrophysical black holes and naked singularities is a major challenge for present day observational astronomy. In the context of stationary and axially symmetrical geometries, a possibility of differentiating naked singularities from black holes is through the comparative study of thin accretion disks properties around rotating naked singularities and Kerr-type black holes, respectively. In the present paper, we consider accretion disks around axially-symmetric rotating naked singularities, obtained as solutions of the field equations in the Einstein-massless scalar field theory. A first major difference between rotating naked singularities and Kerr black holes is in the frame dragging effect, the angular velocity of a rotating naked singularity being inversely proportional to its spin parameter. Because of the differences in the exterior geometry, the thermodynamic and electromagnetic properties of the disks (energy flux, temperature distribution and equilibrium radiation spectrum) are different for these two classes of compact objects, consequently giving clear observational signatures that could discriminate between black holes and naked singularities. For specific values of the spin parameter and of the scalar charge, the energy flux from the disk around a rotating naked singularity can exceed by several orders of magnitude the flux from the disk of a Kerr black hole. 
It is also shown that the conversion efficiency of the accreting mass into radiation by rotating naked singularities is always higher than the conversion efficiency for black holes, i.e., naked singularities provide a much more efficient mechanism for converting mass into radiation than black holes. Thus, these observational signatures may provide the necessary tools for clearly distinguishing rotating naked singularities from Kerr-type black holes.
Are Singularities Integral to General Theory of Relativity?
NASA Astrophysics Data System (ADS)
Krori, K.; Dutta, S.
2011-11-01
Since the 1960s, general relativists have been deeply occupied with the possibilities of GTR singularities, black hole as well as cosmological singularities. Senovilla, for the first time, followed by others, showed that there are cylindrically symmetric cosmological space-times which are free of singularities. On the other hand, Krori et al. have recently shown that spherically symmetric cosmological space-times, which later reduce to FRW space-times, may also be free of singularities. Besides, Mitra has in the meantime come forward with some realistic calculations which seem to rule out the possibility of a black hole singularity. So whether singularities are integral to GTR seems to come under a shadow.
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
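The probabilistic weighted pooling advocated above can be sketched directly: sort the activations in a pooling region in descending order, and weight the i-th largest by the probability that it survives dropout while all larger activations are dropped. The function below is our simplified reading of that idea (the handling of ties and of the all-dropped case are assumptions):

```python
def prob_weighted_pool(region, retain_p):
    """Probabilistic weighted pooling over one pooling region, acting as
    model averaging at test time: for activations sorted in descending
    order, the i-th value is selected as the max under dropout with
    probability retain_p * (1 - retain_p)**i (all i larger values dropped,
    this one retained). Leftover probability mass corresponds to output 0."""
    a = sorted(region, reverse=True)            # a[0] is the would-be max
    out = 0.0
    for i, v in enumerate(a):
        w = retain_p * (1.0 - retain_p) ** i    # P(this value is the surviving max)
        out += w * v
    return out
```

Setting retain_p = 1 recovers ordinary max-pooling, which is consistent with dropout being switched off at test time in the standard scheme.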
Frame prediction using recurrent convolutional encoder with residual learning
NASA Astrophysics Data System (ADS)
Yue, Boxuan; Liang, Jun
2018-05-01
The prediction of future frames of a video is difficult but urgently needed for autonomous driving. Conventional methods can only predict some abstract trends of the region of interest. The boom of deep learning makes the prediction of frames possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to solve the gradient issues. Residual learning transforms the gradient back-propagation into an identity mapping, which preserves the whole gradient information and overcomes the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning can reduce the training time significantly. In the experiments, we use the UCF101 dataset to train our networks, and the predictions are compared with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.
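The residual recurrent update described above amounts to adding an identity (skip) path around the recurrent transformation, so the gradient of the skip path is exactly 1 and flows through unchanged. A toy dense-layer sketch of one such step (the paper uses convolutional connections; all names and shapes here are our own illustration):

```python
import numpy as np

def residual_recurrent_step(h, x, W_h, W_x, b):
    """One recurrent update with a residual (identity) connection:
    h_new = h + tanh(W_h h + W_x x + b). The additive skip path carries
    the gradient through unchanged, which is the stabilising idea behind
    residual learning in recurrent encoders."""
    return h + np.tanh(W_h @ h + W_x @ x + b)
```

When the learned transformation is zero, the state passes through unchanged, so a deep (or long-unrolled) stack of such steps can never do worse than the identity.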
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is cast as semantic segmentation: the FCN classifies each pixel, thereby achieving pixel-level semantic segmentation. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4 to 1.6 m, with a distance error of less than 10 mm.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. Construction of robustly good trellis codes for use with sequential decoding was developed; the robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
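The free distance mentioned above is the minimum Hamming weight over all encoder paths that leave and later re-enter the all-zero state. While the report derives a formula, a brute-force computation makes the definition concrete; the sketch below runs a Dijkstra-style search over encoder states, using the textbook (7, 5) rate-1/2 code as an assumed illustration:

```python
import heapq

def free_distance(generators, k):
    """Free distance of a rate-1/n feed-forward convolutional code with
    constraint length k, found as the minimum-weight path that leaves the
    all-zero state (first input bit 1) and returns to it. Edge weights are
    the Hamming weights of the n output bits; Dijkstra's algorithm applies
    because weights are non-negative."""
    mask = (1 << k) - 1
    best = {}
    s0 = 1 & mask                                   # state after input bit 1
    w0 = sum(bin(s0 & g).count('1') % 2 for g in generators)
    heap = [(w0, s0)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w                                # back at zero state: d_free
        if best.get(s, 1 << 30) <= w:
            continue                                # already settled cheaper
        best[s] = w
        for b in (0, 1):
            ns = ((s << 1) | b) & mask
            nw = w + sum(bin(ns & g).count('1') % 2 for g in generators)
            heapq.heappush(heap, (nw, ns))
    return None
```

For the (7, 5) code the search returns 5, the well-known free distance of that code.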
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure and use graphics processing units (GPUs) to speed up training and testing. For lack of labeled data, we transfer the convolutional layers of a ZF net pretrained on ImageNet to initialize the shared convolutional layers, then retrain the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.
On the dynamic singularities in the control of free-floating space manipulators
NASA Technical Reports Server (NTRS)
Papadopoulos, E.; Dubowsky, S.
1989-01-01
It is shown that free-floating space manipulator systems have configurations which are dynamically singular. At a dynamically singular position, the manipulator is unable to move its end effector in some direction. This problem appears in any free-floating space manipulator system that permits the vehicle to move in response to manipulator motion without correction from the vehicle's attitude control system. Dynamic singularities are functions of the dynamic properties of the system; their existence and locations cannot be predicted solely from the kinematic structure of the manipulator, unlike the singularities for fixed base manipulators. It is also shown that the location of these dynamic singularities in the workspace is dependent upon the path taken by the manipulator in reaching them. Dynamic singularities must be considered in the control, planning and design of free-floating space manipulator systems. A method for calculating these dynamic singularities is presented, and it is shown that the system parameters can be selected to reduce the effect of dynamic singularities on a system's performance.
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Target identification on the sea battlefield is a prerequisite for judging enemy intent in modern naval battle. In this paper, a collaborative identification method based on a convolutional neural network is proposed to identify typical sea-battlefield targets. Different from the traditional single-input/single-output identification method, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with the Westervelt directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly better agreement with measured directivity at a negligible computational cost.
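Under a uniform angular sampling assumption, the proposed model reduces to a discrete convolution of the two directivity patterns over the angle axis. A minimal sketch (the normalisation to unit peak is our choice, not necessarily the paper's):

```python
import numpy as np

def pla_directivity(product_dir, westervelt_dir):
    """Far-field directivity of a parametric loudspeaker array modeled as
    the convolution (over a uniformly sampled angle axis) of the product
    directivity with the Westervelt directivity, normalised to unit peak."""
    d = np.convolve(product_dir, westervelt_dir, mode='same')
    return d / np.max(np.abs(d))
```

As a sanity check, convolving an idealised delta-like product directivity returns the Westervelt pattern itself, since the delta is the identity of convolution.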
Finite element techniques applied to cracks interacting with selected singularities
NASA Technical Reports Server (NTRS)
Conway, J. C.
1975-01-01
The finite-element method for computing the extensional stress-intensity factor for cracks approaching selected singularities of varied geometry is described. Stress-intensity factors are generated using both displacement and J-integral techniques, and numerical results are compared to those obtained experimentally in a photoelastic investigation. The selected singularities considered are a collinear crack, a circular penetration, and a notched circular penetration. Results indicate that singularities greatly influence the crack-tip stress-intensity factor as the crack approaches the singularity. In addition, the degree of influence can be regulated by varying the overall geometry of the singularity. Local changes in singularity geometry have little effect on the stress-intensity factor for the cases investigated.
Zhang, Yongtao; Cui, Yan; Wang, Fei; Cai, Yangjian
2015-05-04
We have investigated the correlation singularities (coherence vortices) of the two-point correlation function in a partially coherent vector beam with initially radial polarization, i.e., a partially coherent radially polarized (PCRP) beam. It is found that these singularities generally occur during free-space propagation. Analytical formulae for characterizing the dynamics of the correlation singularities on propagation are derived. Based on the derived formulae, the influence of the spatial coherence length of the beam on the evolution properties of the correlation singularities, and the conditions for creation and annihilation of the correlation singularities during propagation, have been studied in detail. Some interesting results are illustrated. These correlation singularities have implications for interference experiments with a PCRP beam.
The effect of spherical aberration on the phase singularities of focused dark-hollow Gaussian beams
NASA Astrophysics Data System (ADS)
Luo, Yamei; Lü, Baida
2009-06-01
The phase singularities of focused dark-hollow Gaussian beams in the presence of spherical aberration are studied. It is shown that the evolution behavior of phase singularities of focused dark-hollow Gaussian beams in the focal region depends not only on the truncation parameter and beam order, but also on the spherical aberration. The spherical aberration leads to an asymmetric spatial distribution of singularities outside the focal plane and to a shift of singularities near the focal plane. The reorganization process of singularities and spatial distribution of singularities are additionally dependent on the sign of the spherical aberration. The results are illustrated by numerical examples.
Unidirectional spectral singularities.
Ramezani, Hamidreza; Li, Hao-Kun; Wang, Yuan; Zhang, Xiang
2014-12-31
We propose a class of spectral singularities emerging from the coincidence of two independent singularities with highly directional responses. These spectral singularities result from resonance trapping induced by the interplay between parity-time symmetry and Fano resonances. At these singularities, while the system is reciprocal in terms of a finite transmission, a simultaneous infinite reflection from one side and zero reflection from the opposite side can be realized.
Understanding Singular Vectors
ERIC Educational Resources Information Center
James, David; Botteron, Cynthia
2013-01-01
matrix yields a surprisingly simple, heuristic approximation to its singular vectors. There are correspondingly good approximations to the singular values. Such rules of thumb provide an intuitive interpretation of the singular vectors that helps explain why the SVD is so…
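As a concrete complement to such heuristics, the leading singular vectors can always be computed numerically; a standard route is power iteration on A^T A. The sketch below is generic numerical linear algebra, not the approximation rule discussed in the article:

```python
import numpy as np

def top_singular_triple(A, iters=200, seed=0):
    """Leading singular value and vectors of A by power iteration on A^T A:
    repeatedly apply A^T A to a random vector and renormalise, so the
    iterate converges to the top right singular vector v; then
    sigma = ||A v|| and u = A v / sigma."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)          # one application of A^T A
        v /= np.linalg.norm(v)     # renormalise to avoid overflow
    sigma = np.linalg.norm(A @ v)
    u = A @ v / sigma
    return u, sigma, v
```

Convergence is geometric with ratio (sigma_2 / sigma_1)^2, which is why even a modest number of iterations suffices when the top singular value is well separated.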
Hetonic quartets in a two-layer quasi-geostrophic flow: V-states and stability
NASA Astrophysics Data System (ADS)
Reinaud, J. N.; Sokolovskiy, M. A.; Carton, X.
2018-05-01
We investigate families of finite core vortex quartets in mutual equilibrium in a two-layer quasi-geostrophic flow. The finite core solutions stem from known solutions for discrete (singular) vortex quartets. Two vortices lie in the top layer and two vortices lie in the bottom layer. Two vortices have a positive potential vorticity anomaly, while the two others have negative potential vorticity anomaly. The vortex configurations are therefore related to the baroclinic dipoles known in the literature as hetons. Two main branches of solutions exist depending on the arrangement of the vortices: the translating zigzag-shaped hetonic quartets and the rotating zigzag-shaped hetonic quartets. By addressing their linear stability, we show that while the rotating quartets can be unstable over a large range of the parameter space, most translating quartets are stable. This has implications on the longevity of such vortex equilibria in the oceans.
Hybrid method for moving interface problems with application to the Hele-Shaw flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, T.Y.; Li, Zhilin; Osher, S.
In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations whose solutions and their derivatives may be discontinuous across the interfaces due to the discontinuity of the coefficients or/and singular sources along the interfaces. The moving interfaces are then updated using the newly developed fast level set formulation, which involves computation only inside some small tubes containing the interfaces. This method combines the advantages of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. This new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids with very different viscosity. 40 refs., 10 figs., 3 tabs.
Steganography in arrhythmic electrocardiogram signal.
Edward Jero, S; Ramu, Palaniappan; Ramakrishnan, S
2015-08-01
Security and privacy of patient data is a vital requirement during exchange/storage of medical information over communication network. Steganography method hides patient data into a cover signal to prevent unauthenticated accesses during data transfer. This study evaluates the performance of ECG steganography to ensure secured transmission of patient data where an abnormal ECG signal is used as cover signal. The novelty of this work is to hide patient data into two dimensional matrix of an abnormal ECG signal using Discrete Wavelet Transform and Singular Value Decomposition based steganography method. A 2D ECG is constructed according to Tompkins QRS detection algorithm. The missed R peaks are computed using RR interval during 2D conversion. The abnormal ECG signals are obtained from the MIT-BIH arrhythmia database. Metrics such as Peak Signal to Noise Ratio, Percentage Residual Difference, Kullback-Leibler distance and Bit Error Rate are used to evaluate the performance of the proposed approach.
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust combined Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) watermarking scheme. The RGB image, called the cover medium, and the watermark image, converted into gray scale, are transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2 and HL2. The watermark image is embedded into the cover medium in the LL2 sub-band. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian, Poisson, and salt-and-pepper noise. Under these attacks, the average Normalized Correlation (NC) values are 0.574863, 0.889784 and 0.889782, respectively. The watermark image can still be detected and extracted.
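A simplified version of such a DWT-SVD embedding can be sketched with a single-level Haar transform (the paper uses two decomposition levels and the LL2 sub-band; the single level, the strength factor alpha, and all names here are assumptions for illustration):

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar DWT (averaging form); returns (LL, LH, HL, HH).
    Each sub-band is half the size of the input in both dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def embed_svd(LL, watermark, alpha=0.05):
    """Embed a same-sized grayscale watermark into the singular values of
    the LL sub-band: S' = S + alpha * S_w, then rebuild the band. alpha
    trades imperceptibility against robustness."""
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    return U @ np.diag(S + alpha * Sw) @ Vt
```

Embedding in the singular values is what gives such schemes their robustness to noise-like attacks: small perturbations of a matrix perturb its singular values only slightly.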
Polymer quantization, stability and higher-order time derivative terms
NASA Astrophysics Data System (ADS)
Cumsille, Patricio; Reyes, Carlos M.; Ossandon, Sebastian; Reyes, Camilo
2016-03-01
The possibility that the fundamental discreteness implicit in a quantum gravity theory may act as a natural regulator for the ultraviolet singularities arising in quantum field theory has been intensively studied. Here, with the same expectations, we investigate whether a nonstandard representation, called the polymer representation, can smooth away the large amount of negative energy that afflicts the Hamiltonians of higher-order time derivative theories, rendering the theory unstable when interactions come into play. We focus on the fourth-order Pais-Uhlenbeck model, which can be reexpressed as the sum of two decoupled harmonic oscillators, one producing positive energy and the other negative energy. As expected, the Schrödinger quantization of such a model leads to the stability problem, or to negative-norm states called ghosts. Within the framework of polymer quantization we show the existence of new regions where the Hamiltonian is well defined and bounded from below.
Description and evaluation of an interference assessment for a slotted-wall wind tunnel
NASA Technical Reports Server (NTRS)
Kemp, William B., Jr.
1991-01-01
A wind-tunnel interference assessment method applicable to test sections with discrete finite-length wall slots is described. The method is based on high order panel method technology and uses mixed boundary conditions to satisfy both the tunnel geometry and wall pressure distributions measured in the slotted-wall region. Both the test model and its sting support system are represented by distributed singularities. The method yields interference corrections to the model test data as well as surveys through the interference field at arbitrary locations. These results include the equivalent of tunnel Mach calibration, longitudinal pressure gradient, tunnel flow angularity, wall interference, and an inviscid form of sting interference. Alternative results which omit the direct contribution of the sting are also produced. The method was applied to the National Transonic Facility at NASA Langley Research Center for both tunnel calibration tests and tests of two models of subsonic transport configurations.
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of the potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require the explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems with the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory for research.
Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug
2018-04-30
The purpose of this study is to improve the accuracy of human emotion classification using a convolutional neural network (CNN) model and to suggest an overall method for classifying emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate, while sufficient EEG feature extraction is obtained through the CNN. We therefore propose a CNN model suited to feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, which considers time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing a significant performance improvement over the current best-practice models.
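The zero-crossing-rate preprocessing mentioned for the GSR channel can be sketched as follows (the frame length and framing scheme are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def zero_crossing_rate(x, frame_len=128):
    """Fraction of sign changes per frame of the signal x."""
    x = np.asarray(x, dtype=float)
    n_frames = len(x) // frame_len
    rates = []
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len]
        # count sign flips between consecutive samples
        flips = np.sum(np.abs(np.diff(np.signbit(frame).astype(int))))
        rates.append(flips / (frame_len - 1))
    return np.array(rates)
```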
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
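The Fourier-domain strategy can be illustrated with the reconstruction step of convolutional sparse coding, the sum of convolutions of dictionary filters with coefficient maps, evaluated as a pointwise product of transforms. This is a sketch of the general idea only (the function and the circular-convolution convention are our assumptions):

```python
import numpy as np

def reconstruct(D, X):
    """Sum of circular convolutions d_m * x_m over all M dictionary filters,
    computed with one pointwise multiply in the frequency domain.
    D, X: arrays of shape (M, N) -- filters zero-padded to length N, and coefficient maps."""
    return np.real(np.fft.ifft(np.sum(np.fft.fft(D, axis=1) * np.fft.fft(X, axis=1), axis=0)))
```

Each transform costs O(N log N), so the M-filter reconstruction is O(MN log N), matching the scaling quoted above.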
Multithreaded implicitly dealiased convolutions
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2018-03-01
Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
Tachyon field in loop quantum cosmology: An example of traversable singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Lifang; Zhu Jianyang
2009-06-15
Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. But LQC has an ambiguity about the quantization scheme. Recently, the authors in [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Similar to others, this new quantization scheme also replaces the big bang singularity with the quantum bounce. More interestingly, it introduces a quantum singularity, which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that our universe can evolve through the quantum singularity regularly, which is different from the classical big bang singularity. So this singularity is only a weak singularity.
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models, corresponding to the STFT output and the SWT output, were developed. In contrast to existing algorithms, our method requires neither detection of P or R peaks nor feature design for classification. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24%, and accuracy of 98.29%. The deep convolutional neural network using input generated by SWT achieved a sensitivity of 98.79%, specificity of 97.87%, and accuracy of 98.63%. The proposed method using deep convolutional neural networks shows high sensitivity, specificity, and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
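As a minimal, hedged sketch of turning a 1-D ECG segment into the kind of 2-D time-frequency matrix a CNN consumes (the window, hop, and Hann taper are our illustrative choices, not the paper's parameters), a naive STFT magnitude can be computed with plain FFTs:

```python
import numpy as np

def stft_magnitude(x, frame_len=64, hop=32):
    """Naive STFT: Hann-windowed frames -> rFFT magnitudes, shape (freq_bins, n_frames)."""
    x = np.asarray(x, dtype=float)
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T
```

With these settings, a 5 s segment sampled at 300 Hz (1500 samples) yields a 33 x 45 matrix.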
High order spectral difference lattice Boltzmann method for incompressible hydrodynamics
NASA Astrophysics Data System (ADS)
Li, Weidong
2017-09-01
This work presents a lattice Boltzmann equation (LBE) based high-order spectral difference method for incompressible flows. In the present method, the spectral difference (SD) method is adopted to discretize the convection and collision terms of the LBE to obtain high-order (≥3) accuracy. Because the SD scheme represents the solution as cell-local polynomials and the solution polynomials have a good tensor-product property, the present spectral difference lattice Boltzmann method (SD-LBM) can be implemented on arbitrary unstructured quadrilateral meshes for effective and efficient treatment of complex geometries. Because only first-order PDEs are involved in the LBE, no special techniques, such as the hybridizable discontinuous Galerkin (HDG) or local discontinuous Galerkin (LDG) methods, are needed to discretize a diffusion term; this simplifies the algorithm and implementation of the high-order spectral difference method for simulating viscous flows. The proposed SD-LBM is validated with four incompressible flow benchmarks in two dimensions: (a) the Poiseuille flow driven by a constant body force; (b) the lid-driven cavity flow without singularity at the two top corners (Burggraf flow); (c) the unsteady Taylor-Green vortex flow; and (d) the Blasius boundary-layer flow past a flat plate. Computational results are compared with analytical solutions, and convergence studies are given for each case. The designed accuracy of the proposed SD-LBM is clearly verified.
PREFACE: Physics and Mathematics of Nonlinear Phenomena 2013 (PMNP2013)
NASA Astrophysics Data System (ADS)
Konopelchenko, B. G.; Landolfi, G.; Martina, L.; Vitolo, R.
2014-03-01
The modern theory of nonlinear integrable equations is nowadays an important and effective tool for studying numerous nonlinear phenomena in various branches of physics, from hydrodynamics and optics to quantum field theory and gravity. It includes the study of nonlinear partial differential and discrete equations, the regular and singular behaviour of their solutions, Hamiltonian and bi-Hamiltonian structures and their symmetries, and associated deformations of algebraic and geometrical structures, with applications to various models in physics and mathematics. The PMNP 2013 conference focused on recent advances and developments in: continuous and discrete, classical and quantum integrable systems; Hamiltonian, critical and geometric structures of nonlinear integrable equations; integrable systems in quantum field theory and matrix models; models of nonlinear phenomena in physics; and applications of nonlinear integrable systems in physics. The Scientific Committee of the conference was formed by Francesco Calogero (University of Rome `La Sapienza', Italy), Boris A Dubrovin (SISSA, Italy), Yuji Kodama (Ohio State University, USA), Franco Magri (University of Milan `Bicocca', Italy) and Vladimir E Zakharov (University of Arizona, USA, and Landau Institute for Theoretical Physics, Russia). The Organizing Committee: Boris G Konopelchenko, Giulio Landolfi and Luigi Martina, Department of Mathematics and Physics `E De Giorgi' and the Istituto Nazionale di Fisica Nucleare, and Raffaele Vitolo, Department of Mathematics and Physics `E De Giorgi'. A list of sponsors, speakers, talks and participants, and the conference photograph, are given in the PDF.
NASA Astrophysics Data System (ADS)
Melazzi, D.; Curreli, D.; Manente, M.; Carlsson, J.; Pavarin, D.
2012-06-01
We present SPIREs (plaSma Padova Inhomogeneous Radial Electromagnetic solver), a Finite-Difference Frequency-Domain (FDFD) electromagnetic solver in one dimension for the rapid calculation of the electromagnetic fields and the deposited power in a large variety of cylindrical plasma problems. The two Maxwell wave equations have been discretized using a staggered Yee mesh along the radial direction of the cylinder, and Fourier transformed along the other two dimensions and in time. With this discretization, we have found that mode coupling of the fast and slow branches can be fully resolved without the singularity issues that plagued other well-established methods in the past. Fields are forced by an antenna placed at a given distance from the plasma. The plasma can be inhomogeneous, finite-temperature, collisional, magnetized and multi-species. Finite-temperature Maxwellian effects, comprising Landau and cyclotron damping, have been included by means of the plasma Z dispersion function. Finite Larmor radius effects have been neglected. Radial variations of the plasma parameters are taken into account, thus extending the range of applications to a large variety of inhomogeneous plasma systems. The method proved to be fast and reliable, with accuracy depending on the spatial grid size. Two physical examples are reported: fields in a forced vacuum waveguide with the antenna inside, and forced plasma oscillations in the helicon radiofrequency range.
Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-01-25
Phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied for the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved to be effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and numerical treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.
Singularities in loop quantum cosmology.
Cailleteau, Thomas; Cardoso, Antonio; Vandersloot, Kevin; Wands, David
2008-12-19
We show that simple scalar field models can give rise to curvature singularities in the effective Friedmann dynamics of loop quantum cosmology (LQC). We find singular solutions for spatially flat Friedmann-Robertson-Walker cosmologies with a canonical scalar field and a negative exponential potential, or with a phantom scalar field and a positive potential. While LQC avoids big bang or big rip type singularities, we find sudden singularities where the Hubble rate is bounded, but the Ricci curvature scalar diverges. We conclude that the effective equations of LQC are not in themselves sufficient to avoid the occurrence of curvature singularities.
NASA Technical Reports Server (NTRS)
Sidi, A.; Israeli, M.
1986-01-01
High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
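The starting point of such Euler-Maclaurin-based methods is that the plain trapezoidal rule is already spectrally accurate for smooth periodic integrands; the expansions in the paper supply the corrections needed in the singular case. A minimal check of the smooth case (our own sketch, not the authors' code):

```python
import numpy as np

def periodic_trapezoid(f, n):
    """Trapezoidal rule for a smooth 2*pi-periodic function on [0, 2*pi):
    with equispaced nodes the endpoint weights merge, leaving a plain mean."""
    theta = 2.0 * np.pi * np.arange(n) / n
    return 2.0 * np.pi * np.mean(f(theta))
```

For an analytic periodic integrand the error decays faster than any power of 1/n, so even 16 nodes agree with a fine reference to near machine precision.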
Off-resonance artifacts correction with convolution in k-space (ORACLE).
Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne
2012-06-01
Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifact correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. The off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance-correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data when an alternating view-angle ordering scheme is used. An additional advantage of off-resonance artifact correction based on data convolution in k-space is the reusability of convolution kernels for images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung
2018-04-23
In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8,571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained on the training dataset; all three values were 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected of suffering from OSA.
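The reported figures follow directly from a confusion matrix; a minimal sketch of the three metrics (plain Python, our own helper, with apnea events labeled 1):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1-score for binary labels (1 = apnea event)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)                    # fraction of alarms that are real
    recall = tp / (tp + fn)                       # fraction of real events caught
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```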
Convolutional virtual electric field for image segmentation using active contours.
Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden
2014-01-01
Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, U-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
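In the same spirit as the VEF/VFC/CONVEF family described above, an external force field can be obtained by convolving the edge map with a radial vector kernel via 2-D FFTs. This sketch uses the plain radial kernel with a Euclidean distance; the CONVEF model's modified distance is not reproduced here, and all names are our assumptions:

```python
import numpy as np

def vector_field_convolution(edge_map, gamma=2.0):
    """External force (fx, fy) for active contours: the edge map circularly
    convolved, via FFTs, with k(r) = -r / |r|**(gamma + 1), which points toward edges."""
    h, w = edge_map.shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2].astype(float)
    r = np.hypot(x, y)
    r[r == 0] = 1.0                      # avoid the singularity at the kernel origin
    kx, ky = -x / r ** (gamma + 1), -y / r ** (gamma + 1)
    E = np.fft.fft2(edge_map)
    conv = lambda k: np.real(np.fft.ifft2(E * np.fft.fft2(np.fft.ifftshift(k))))
    return conv(kx), conv(ky)
```

Because the whole field is two FFT products, the cost is O(N log N) in the number of pixels, which is what makes real-time operation plausible.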
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. The method also increases decoding ability by removing an ambiguous condition.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by eliminating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.
1992-12-01
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the US... v = cncd(2,1,6,G64,u,zeros(1,12)); % Convolutional encoding. mm = bm(2,v); % Binary to M-ary conversion. clear v u; mm = inter(50,200,mm); % Interleaving (50... save result err. B. CNCD.M (CONVOLUTIONAL ENCODER FUNCTION): function [v,vr] = cncd(n,k,m,Gr,u,r) % CONVOLUTIONAL ENCODER % Paul H. Moose % Naval
Time history solution program, L225 (TEV126). Volume 1: Engineering and usage
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.
1979-01-01
Volume 1 of a two-volume document is presented, describing the usage of the convolution program L225 (TEV126). The program calculates the time response of a linear system by convolving the impulsive response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
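The frequency-domain convolution such a program performs can be sketched in a few lines: transform the impulse response and the excitation (zero-padded so the circular convolution becomes linear), multiply, and transform back. The array names and the first-order-system example are our illustrative assumptions:

```python
import numpy as np

def time_history(h, u):
    """Response y = h * u of a linear system, as multiplication in the frequency
    domain; zero-padding to len(h)+len(u)-1 avoids wrap-around contamination."""
    n = len(h) + len(u) - 1
    return np.real(np.fft.ifft(np.fft.fft(h, n) * np.fft.fft(u, n)))
```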
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For a partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states is required for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Singularity analysis: theory and further developments
NASA Astrophysics Data System (ADS)
Cheng, Qiuming
2015-04-01
Since the concept of singularity and the local singularity analysis (LSA) method were originally proposed by the author for characterizing the nonlinear property of hydrothermal mineralization processes, the local singularity analysis technique has been successfully applied to the identification of geochemical and geophysical anomalies related to various types of mineral deposits. It has also been shown that singularity is the generic property of singular geo-processes, which result in anomalous amounts of energy release or material accumulation within a narrow spatial-temporal interval. In the current paper we introduce several new developments in singularity analysis. The first is a new concept of 'fractal density', which describes the singularity of complex phenomena of fractal nature. While ordinary density has units of mass over volume (e.g. g/cm3, kg/m3) or energy over volume or time (e.g. J/cm3, w/L3, w/s), fractal density has units of mass over a fractal set or energy over a fractal set (e.g. g/cmα, kg/mα, J/mα, w/Lα, where α can be a non-integer). For matter with fractal density (a non-integer α), the ordinary density of the phenomenon (mass or energy) no longer exists, and this depicts singularity. We demonstrate that most extreme geo-processes occurring in the Earth's crust, originating from cascading Earth dynamics (mantle convection, plate tectonics, orogeny, weathering, etc.), may cause fractal density of mass accumulation or energy release. The examples used to demonstrate the concepts of fractal density and singularity are earthquakes, floods, volcanoes, hurricanes, heat flow over oceanic ridges, hydrothermal mineralization in orogenic belts, and anomalies in regolith over mines caused by vertical migration of ore and toxic elements. Other developments of singularity theory and methodology, including singular kriging and a singularity weights-of-evidence model for information integration, will also be introduced.
A Generalized Method of Image Analysis from an Intercorrelation Matrix which May Be Singular.
ERIC Educational Resources Information Center
Yanai, Haruo; Mukherjee, Bishwa Nath
1987-01-01
This generalized image analysis method is applicable to singular and non-singular correlation matrices (CMs). Using the orthogonal projector and a weaker generalized inverse matrix, image and anti-image covariance matrices can be derived from a singular CM. (SLD)
Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?
Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D
2016-01-01
The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.
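The simple-versus-convoluted distinction can be sketched as a connected-components computation on the bipartite ICD-9 to ICD-10 mapping graph: a component containing a single ICD-9 code is simple (even if it fans out to several ICD-10 codes), while components where several ICD-9 codes share ICD-10 targets are convoluted. The helper and the example codes below are illustrative assumptions, a simplification of the study's network analysis, not its actual data:

```python
from collections import defaultdict

def classify_mappings(icd9_to_icd10):
    """Label each ICD-9 code 'simple' or 'convoluted' via union-find over ICD-9
    codes, joining two codes whenever they map to a shared ICD-10 code."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    owners = defaultdict(list)              # ICD-10 code -> ICD-9 codes mapping to it
    for c9, targets in icd9_to_icd10.items():
        find(c9)
        for c10 in targets:
            owners[c10].append(c9)
    for c9s in owners.values():
        for other in c9s[1:]:
            union(c9s[0], other)
    groups = defaultdict(set)
    for c9 in icd9_to_icd10:
        groups[find(c9)].add(c9)
    return {c9: ('simple' if len(groups[find(c9)]) == 1 else 'convoluted')
            for c9 in icd9_to_icd10}
```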
Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?
Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.
2017-01-01
Objectives The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine if the transition was simple or convoluted. A simple translation is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only < 0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. PMID:26769875
Li, Lifeng
2012-04-01
I extend a previous work [J. Opt. Soc. Am. A, 738 (2011)] on field singularities at lossless metal-dielectric right-angle edges, and their ramifications for the numerical modeling of gratings, to the case of arbitrary metallic wedge angles. Simple criteria are given that allow one, knowing the lossless permittivities and the arbitrary wedge angles, to determine whether the electric field at the edges is nonsingular, can be regularly singular, or can be irregularly singular, without calculating the singularity exponent. Furthermore, knowledge of the singularity type enables one to predict immediately whether a numerical method that uses Fourier expansions of the transverse electric field components at the edges will converge, without making any numerical tests. All conclusions of the previous work about the general relationships between field singularities, Fourier representation of singular fields, and convergence of numerical methods for modeling lossless metal-dielectric gratings have been reconfirmed.
Elasticity solutions for a class of composite laminate problems with stress singularities
NASA Technical Reports Server (NTRS)
Wang, S. S.
1983-01-01
A study on the fundamental mechanics of fiber-reinforced composite laminates with stress singularities is presented. Based on the theory of anisotropic elasticity and Lekhnitskii's complex-variable stress potentials, a system of coupled governing partial differential equations is established. An eigenfunction expansion method is introduced to determine the orders of stress singularities in composite laminates with various geometric configurations and material systems. Complete elasticity solutions are obtained for this class of singular composite laminate mechanics problems. Homogeneous solutions in eigenfunction series and particular solutions in polynomials are presented for several cases of interest. Three examples are given to illustrate the method of approach and the basic nature of the singular laminate elasticity solutions. The first problem is the well-known laminate free-edge stress problem, which has a rather weak stress singularity. The second problem is the important composite delamination problem, which has a strong crack-tip stress singularity. The third problem is the commonly encountered bonded composite joint, which has a complex solution structure with moderate orders of stress singularities.
Future singularity avoidance in phantom dark energy models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haro, Jaume de, E-mail: jaime.haro@upc.edu
2012-07-01
Different approaches to quantum cosmology are studied in order to deal with the problem of future singularity avoidance. Our results show that these future singularities will persist but could take different forms. As an example we have studied the big rip, which appears when one considers the equation of state P = ωρ with ω < −1, showing that it does not disappear in modified gravity. On the other hand, it is well known that quantum geometric effects (holonomy corrections) in loop quantum cosmology introduce a quadratic modification, namely a term proportional to ρ², in Friedmann's equation that replaces the big rip by a non-singular bounce. However, this modified Friedmann equation could have been obtained in an inconsistent way, which means that the results obtained from it, in particular singularity avoidance, would be incorrect. In fact, we show that instead of a non-singular bounce, the big rip singularity would be replaced, in loop quantum cosmology, by another kind of singularity.
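For reference, the holonomy-corrected Friedmann equation alluded to above is usually written in the loop quantum cosmology literature as (a standard form quoted from general knowledge, not from this abstract):

```latex
H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)
```

The correction quadratic in ρ forces H = 0 when ρ reaches the critical density ρ_c, which is the origin of the bounce whose consistency the abstract questions.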
Loop quantum cosmology and singularities.
Struyve, Ward
2017-08-15
Loop quantum gravity is believed to eliminate singularities such as the big bang and big crunch singularity. This belief is based on studies of so-called loop quantum cosmology, which concerns symmetry-reduced models of quantum gravity. In this paper, the problem of singularities is analysed in the context of the Bohmian formulation of loop quantum cosmology. In this formulation there is an actual metric in addition to the wave function, which evolves stochastically (rather than deterministically, as in the case of the particle evolution in non-relativistic Bohmian mechanics). Thus a singularity occurs whenever this actual metric is singular. It is shown that in loop quantum cosmology for a homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker space-time with arbitrary constant spatial curvature and cosmological constant, coupled to a massless homogeneous scalar field, a big bang or big crunch singularity is never obtained. This should be contrasted with the fact that in the Bohmian formulation of the Wheeler-DeWitt theory singularities may exist.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Robert L., E-mail: rdixon@wfubmc.edu; Boone, John M.; Kraft, Robert A.
2014-11-01
Purpose: With the increasing clinical use of shift-variant CT protocols involving tube current modulation (TCM), variable pitch or pitch modulation (PM), and variable aperture a(t), the interpretation of the scanner-reported CTDI_vol is called into question. This was addressed for TCM in a previous paper by Dixon and Boone [Med. Phys. 40, 111920 (14pp.) (2013)] and is extended to PM and concurrent TCM/PM, as well as variable aperture, in this work. Methods: Rigorous convolution equations are derived to describe the accumulated dose distributions for TCM, PM, and concurrent TCM/PM. A comparison with scanner-reported CTDI_vol formulae clearly identifies the source of their differences with the traditional CTDI_vol. Dose distribution simulations using the convolution are provided for a variety of TCM and PM scenarios including a helical shuttle used for perfusion studies (as well as constant mA), all having the same scanner-reported CTDI_vol. These new convolution simulations for TCM are validated by comparison with the previous discrete summations. Results: These equations show that PM is equivalent to TCM if the pitch variation p(z) is proportional to 1/i(z), where i(z) is the local tube current. The simulations show that the local dose at z depends only weakly on the local tube current i(z) or local pitch p(z) due to scatter from all other locations along z, and that the "local CTDI_vol(z)" or "CTDI_vol per slice" do not represent a local dose but rather only a relative i(z) or p(z). The CTDI paradigm does not apply to shift-variant techniques, and the scanner-reported CTDI_vol for such techniques lacks physical significance and relevance.
Conclusions: While the traditional CTDI_vol at constant tube current and pitch conveys useful information (the peak dose at the center of the scan length), the CTDI_vol for shift-variant techniques (TCM or PM) conveys no useful information about the associated dose distribution it purportedly represents. On the other hand, the total energy absorbed E ("integral dose") as well as its surrogate DLP remain robust (invariant) with respect to shift-variance, depending only on the total mAs = 〈i〉t_0 accumulated during the total beam-on time t_0 and the aperture a, where 〈i〉 is the average current.
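The abstract's central point, that the local dose depends only weakly on the local tube current because of scatter from all other positions along z, can be illustrated with a toy convolution model. The exponential kernel and all numbers below are invented for illustration; the paper derives the rigorous equations, which also include pitch and aperture factors:

```python
import numpy as np

def accumulated_dose(i_of_z, dsf):
    """Accumulated dose from a tube-current profile i(z), modeled as a
    convolution with a single-rotation dose-spread function (illustrative
    form only; pitch and aperture factors are omitted)."""
    return np.convolve(i_of_z, dsf, mode="same")

z = np.arange(-200.0, 200.0, 1.0)          # mm along the scan axis
dsf = np.exp(-np.abs(z) / 40.0)            # toy dose-spread function with long scatter tails
dsf /= dsf.sum()                           # normalize so constant mA maps to itself

i_const = np.full(z.size, 100.0)                        # constant tube current (mA)
i_tcm = 100.0 + 50.0 * np.sin(2 * np.pi * z / 80.0)     # TCM profile, same mean mA
d_const = accumulated_dose(i_const, dsf)
d_tcm = accumulated_dose(i_tcm, dsf)

# Away from the scan ends, scatter from neighboring rotations smooths the
# modulation: the dose ripple is far smaller than the 50% current ripple.
mid = z.size // 2
interior = slice(100, 300)
ripple = (d_tcm[interior].max() - d_tcm[interior].min()) / d_const[mid]
print(round(ripple, 3))
```

The smoothing is just the low-pass action of the scatter tails, which is why a "CTDI_vol per slice" built from the local i(z) does not track the local dose.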
Reduced-rank approximations to the far-field transform in the gridded fast multipole method
NASA Astrophysics Data System (ADS)
Hesford, Andrew J.; Waag, Robert C.
2011-05-01
The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
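The recompression step described above, re-orthogonalizing over-ranked low-rank factors and truncating with a small SVD, can be sketched as follows. The tolerance and the Gaussian test kernel are illustrative choices, and a real code would start from factors produced by an actual ACA routine rather than from a full SVD:

```python
import numpy as np

def recompress(U, V, tol):
    """Recompress a rank-k factorization A ~ U @ V.T to a smaller rank.

    Thin QR factors of U and V reduce the problem to an SVD of a small
    k-by-k core; singular values below tol * s[0] are truncated. This is
    the standard 'factorization + truncated SVD' recompression step.
    """
    Qu, Ru = np.linalg.qr(U)               # U = Qu @ Ru
    Qv, Rv = np.linalg.qr(V)               # V = Qv @ Rv
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)    # SVD of the small core
    r = int(np.sum(s > tol * s[0]))        # truncated rank
    Unew = (Qu @ W[:, :r]) * s[:r]         # fold singular values into U
    Vnew = Qv @ Zt[:r, :].T
    return Unew, Vnew

# Demo on a smooth (hence numerically low-rank) kernel matrix:
x = np.linspace(0.0, 1.0, 200)
A = np.exp(-np.subtract.outer(x, x) ** 2)       # smooth Gaussian kernel
U0, s0, V0t = np.linalg.svd(A)
k = 40                                          # deliberately over-ranked
U, V = U0[:, :k] * s0[:k], V0t[:k, :].T         # stand-in for an ACA factorization
Ur, Vr = recompress(U, V, tol=1e-10)
err = np.linalg.norm(A - Ur @ Vr.T) / np.linalg.norm(A)
print(Ur.shape[1], err)                         # rank drops well below 40
```

Because the QR factorizations and the core SVD cost only O((m+n)k²) and O(k³), the recompression preserves the asymptotic efficiency of ACA assembly, which is the property the abstract emphasizes.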
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
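As a sketch of the kind of calculation item (3) refers to, the free distance of a small convolutional code can be found by a brute-force trellis search over paths that diverge from and re-merge with the zero state. The rate-1/2 (7,5) code below is a textbook example, not one of the specific codes studied in this project:

```python
def encode(bits, gens=(0o7, 0o5), K=3):
    """Rate-1/2 convolutional encoder with octal generators (default (7,5)).
    Returns the interleaved output bits, including K-1 tail (flush) bits."""
    state = 0
    out = []
    for b in list(bits) + [0] * (K - 1):        # tail drives the register back to zero
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)
    return out

def free_distance(gens=(0o7, 0o5), K=3, max_len=20):
    """Minimum Hamming weight over paths that leave state 0 and re-merge with it:
    a brute-force computation of the code's free distance."""
    best = None
    s0 = 1 & ((1 << K) - 1)                     # diverge with an input '1'
    w0 = sum(bin(s0 & g).count("1") % 2 for g in gens)
    frontier = {s0: w0}                         # state -> minimum accumulated weight
    for _ in range(max_len):
        nxt = {}
        for state, w in frontier.items():
            for b in (0, 1):
                ns = ((state << 1) | b) & ((1 << K) - 1)
                nw = w + sum(bin(ns & g).count("1") % 2 for g in gens)
                if ns == 0:                     # re-merged: candidate error event
                    best = nw if best is None else min(best, nw)
                elif ns not in nxt or nw < nxt[ns]:
                    nxt[ns] = nw
        frontier = nxt
    return best

print(free_distance())   # the (7,5) code has free distance 5
```

The same trellis machinery, extended with path counting, yields the weight enumerator used in exact performance calculations.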
A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information from hyperspectral images is one of the most important approaches for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed which correctly extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Although CNNs have shown robustness to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
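The spatial pyramid pooling idea, pooling over a pyramid of grids so that the output length no longer depends on the input's spatial size, can be sketched in a few lines. This is a generic NumPy illustration of SPP, not the authors' implementation, and the level sizes are conventional choices:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over a pyramid of grids.

    Each level n splits the spatial plane into an n-by-n grid and max-pools
    each cell, so the output length C * sum(n*n) is fixed regardless of H, W.
    """
    C, H, W = fmap.shape
    feats = []
    for n in levels:
        hs = np.linspace(0, H, n + 1).astype(int)   # row boundaries of the grid
        ws = np.linspace(0, W, n + 1).astype(int)   # column boundaries
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                feats.append(cell.max(axis=(1, 2)))
    return np.concatenate(feats)

# Output length is independent of the input's spatial size:
v1 = spatial_pyramid_pool(np.random.rand(8, 13, 13))
v2 = spatial_pyramid_pool(np.random.rand(8, 21, 17))
print(v1.size, v2.size)   # both 8 * (1 + 4 + 16) = 168
```

The fixed-length output is what lets a fully connected classifier sit on top of convolutional features extracted at multiple scales.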
Detection of prostate cancer on multiparametric MRI
NASA Astrophysics Data System (ADS)
Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy
2017-03-01
In this manuscript, we describe our approach and methods for the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture, described as auto-windowing, which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low contrast edges may cause difficulty for traditional convolutional neural networks trained on high contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can be feasibly trained on small datasets.
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolution feature maps are significantly sensitive to the distortion degree of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the distortion type and the degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict a subjective quality score for a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely-spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
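The cross-convolution measure itself is easy to state: for a two-component record obs_i = s * g_i (unknown source wavelet s convolved with impulse response g_i), the residual obs1 * pre2 − obs2 * pre1 vanishes when the predicted responses pre_i match g_i, because convolution commutes and s drops out. A minimal NumPy illustration with synthetic wavelets (not the paper's elastic kernels):

```python
import numpy as np

def cross_convolution_misfit(obs1, obs2, pre1, pre2):
    """Cross-convolution goodness of fit for a two-component record.

    If obs_i = s * g_i and pre_i models g_i, then obs1*pre2 - obs2*pre1
    equals s*g1*g2 - s*g2*g1 = 0 when pre_i = g_i, independently of the
    unknown source s."""
    e = np.convolve(obs1, pre2) - np.convolve(obs2, pre1)
    return np.sum(e ** 2)

rng = np.random.default_rng(0)
s = rng.standard_normal(32)                       # unknown source wavelet
g1, g2 = rng.standard_normal(16), rng.standard_normal(16)
obs1, obs2 = np.convolve(s, g1), np.convolve(s, g2)

perfect = cross_convolution_misfit(obs1, obs2, g1, g2)
wrong = cross_convolution_misfit(obs1, obs2, g1, rng.standard_normal(16))
print(perfect, wrong)     # ~0 versus a large misfit
```

Eliminating the source in this way is what makes the measure attractive for waveform tomography, where the source wavelet is rarely known.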
New singularities in unexpected places
NASA Astrophysics Data System (ADS)
Barrow, John D.; Graham, Alexander A. H.
2015-09-01
Spacetime singularities have been discovered which are physically much weaker than those predicted by the classical singularity theorems. Geodesics evolve through them and they only display infinities in the derivatives of their curvature invariants. So far, these singularities have appeared to require rather exotic and unphysical matter for their occurrence. Here, we show that a large class of singularities of this form can be found in a simple Friedmann cosmology containing only a scalar field with a power-law self-interaction potential. Their existence challenges several preconceived ideas about the nature of spacetime singularities and has an impact upon the end of inflation in the early universe.
Exotic singularities and spatially curved loop quantum cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Parampreet; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5; Vidotto, Francesca
2011-03-15
We investigate the occurrence of various exotic spacelike singularities in the past and the future evolution of the k = ±1 Friedmann-Robertson-Walker models in loop quantum cosmology, using a sufficiently general phenomenological model for the equation of state. We highlight the nontrivial role played by the intrinsic curvature for these singularities and the new physics which emerges at the Planck scale. We show that quantum gravity effects generically resolve all strong curvature singularities including big rip and big freeze singularities. The weak singularities, which include sudden and big brake singularities, are ignored by quantum gravity when spatial curvature is negative, as was previously found for the spatially flat model. Interestingly, for the spatially closed model there exist cases where weak singularities may be resolved when they occur in the past evolution. The spatially closed model exhibits another novel feature. For a particular class of equation of state, this model also exhibits an additional physical branch in loop quantum cosmology, a baby universe separated from the parent branch. Our analysis generalizes previous results obtained on the resolution of strong curvature singularities in flat models to isotropic spacetimes with nonzero spatial curvature.
Singular spectrum and singular entropy used in signal processing of NC table
NASA Astrophysics Data System (ADS)
Wang, Linhong; He, Yiwen
2011-12-01
An NC (numerical control) table is a complex dynamic system. The dynamic characteristics caused by backlash, friction and elastic deformation among the components are so complex that they have become the bottleneck in enhancing the positioning accuracy, tracking accuracy and dynamic behavior of NC tables. This paper collects vibration acceleration signals from an NC table, analyzes the signals with the SVD (singular value decomposition) method, acquires the singular spectrum and calculates the singular entropy of the signals. The signal characteristics of the NC table and their regularities are revealed via characteristic quantities such as the singular spectrum and singular entropy. The steepness of the singular spectrum can be used to discriminate the complexity of signals. The results show that the signals in the direction of the driving axes are the simplest and the signals in the perpendicular direction are the most complex. The singular entropy values can be used to study the indeterminacy of signals. The results show that the signals of the NC table are neither simple signals nor white noise; the entropy values in the direction of the driving axis are lower; the entropy values increase with the driving speed; and the entropy values under abnormal working conditions such as resonance or creeping decrease markedly.
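The quantities used here, the SVD singular spectrum of a signal's trajectory matrix and a Shannon-type singular entropy built from it, can be sketched as follows. The window length and the normalization p_i = s_i / Σs_j are common choices assumed for illustration, not taken from this paper:

```python
import numpy as np

def singular_spectrum(signal, window):
    """SVD singular spectrum of a 1-D signal via its trajectory (Hankel) matrix."""
    traj = np.lib.stride_tricks.sliding_window_view(signal, window)
    return np.linalg.svd(traj, compute_uv=False)

def singular_entropy(signal, window):
    """Shannon entropy of the normalized singular spectrum: low for a simple
    (near-periodic) signal whose spectrum is steep, high for broadband noise."""
    s = singular_spectrum(signal, window)
    p = s / s.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 1000)
tone = np.sin(2 * np.pi * 50 * t)                        # simple vibration signal
noise = np.random.default_rng(1).standard_normal(1000)   # broadband noise
print(singular_entropy(tone, 30), singular_entropy(noise, 30))
# the tone's entropy is much lower than the noise's
```

A pure tone yields a trajectory matrix of numerical rank two, so its singular spectrum is steep and its entropy low; white noise spreads energy over all singular values.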
Continuations of the nonlinear Schrödinger equation beyond the singularity
NASA Astrophysics Data System (ADS)
Fibich, G.; Klein, M.
2011-07-01
We present four continuations of the critical nonlinear Schrödinger equation (NLS) beyond the singularity: (1) a sub-threshold power continuation, (2) a shrinking-hole continuation for ring-type solutions, (3) a vanishing nonlinear-damping continuation and (4) a complex Ginzburg-Landau (CGL) continuation. Using asymptotic analysis, we explicitly calculate the limiting solutions beyond the singularity. These calculations show that for generic initial data that lead to a loglog collapse, the sub-threshold power limit is a Bourgain-Wang solution, both before and after the singularity, and the vanishing nonlinear-damping and CGL limits are a loglog solution before the singularity, and have an infinite-velocity expanding core after the singularity. Our results suggest that all NLS continuations share the universal feature that after the singularity time Tc, the phase of the singular core is only determined up to multiplication by e^{iθ}. As a result, interactions between post-collapse beams (filaments) become chaotic. We also show that when the continuation model leads to a point singularity and preserves the NLS invariance under the transformation t → -t and ψ → ψ*, the singular core of the weak solution is symmetric with respect to Tc. Therefore, the sub-threshold power and the shrinking-hole continuations are symmetric with respect to Tc, but continuations which are based on perturbations of the NLS equation are generically asymmetric.
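For reference, the critical NLS discussed here is usually written (a standard form from the NLS literature, not quoted from this abstract) as:

```latex
i\,\psi_t(t,\mathbf{x}) + \Delta\psi + |\psi|^{4/d}\,\psi = 0
```

with d the spatial dimension. The symmetry invoked in the text is invariance under t → −t together with ψ → ψ*, and the post-collapse ambiguity is multiplication of the singular core by e^{iθ}.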
Dynamical singularities for complex initial conditions and the motion at a real separatrix.
Shnerb, Tamar; Kay, K G
2006-04-01
This work investigates singularities occurring at finite real times in the classical dynamics of one-dimensional double-well systems with complex initial conditions. The objective is to understand the relationship between these singularities and the behavior of the systems for real initial conditions. An analytical treatment establishes that the dynamics of a quartic double well system possesses a doubly infinite sequence of singularities. These are associated with initial conditions that converge to those for the real separatrix as the singularity time becomes infinite. This confluence of singularities is shown to lead to the unstable behavior that characterizes the real motion at the separatrix. Numerical calculations confirm the existence of a large number of singularities converging to the separatrix for this and two additional double-well systems. The approach of singularities to the real axis is of particular interest since such behavior has been related to the formation of chaos in nonintegrable systems. The properties of the singular trajectories which cause this convergence to the separatrix are identified. The hyperbolic fixed point corresponding to the potential energy maximum, responsible for the characteristic motion at a separatrix, also plays a critical role in the formation of the complex singularities by delaying trajectories and then deflecting them into asymptotic regions of space from where they are directly repelled to infinity in a finite time.
NASA Astrophysics Data System (ADS)
Tzanis, Andreas
2013-04-01
The Ground Probing Radar (GPR) has become a valuable means of exploring thin and shallow structures for geological, geotechnical, engineering, environmental, archaeological and other work. GPR images usually contain geometric (orientation/dip-dependent) information from point scatterers (e.g. diffraction hyperbolae), dipping reflectors (geological bedding, structural interfaces, cracks, fractures and joints) and other conceivable structural configurations. In geological, geotechnical and engineering applications, one of the most significant objectives is the detection of fractures, inclined interfaces and empty or filled cavities frequently associated with jointing/faulting. These types of target, especially fractures, are usually not good reflectors and are spatially localized. Their scale is therefore a factor significantly affecting their detectability. At the same time, the GPR method is notoriously susceptible to noise. Quite frequently, extraneous (natural or anthropogenic) interference and systemic noise swamp the data with unusable information that obscures, or even conceals, the reflections from such targets. In many cases, the noise has definite directional characteristics (e.g. clutter). Raw GPR data require post-acquisition processing, as they usually provide only approximate target shapes and distances (depths). The purpose of this paper is to investigate the Curvelet Transform (CT) as a means of S/N enhancement and information retrieval from 2-D GPR sections (B-scans), with particular emphasis placed on the problem of recovering features associated with specific temporal or spatial scales and geometry (orientation/dip). The CT is a multiscale and multidirectional expansion that formulates a sparse representation of the input data set (Candès and Donoho, 2003a, 2003b, 2004; Candès et al., 2006). A signal representation is sparse when it describes the signal as a superposition of a small number of components.
What makes the CT appropriate for processing GPR data is its capability to describe wavefronts. The roots of the CT are traced to the field of Harmonic Analysis, where curvelets were introduced as expansions for asymptotic solutions of wave equations (Smith, 1998; Candès, 1999). In consequence, curvelets can be viewed as primitive and prototype waveforms - they are local in both space and spatial frequency and correspond to a partitioning of the 2D Fourier plane by highly anisotropic elements (for the high frequencies) that obey the parabolic scaling principle, namely that their width is proportional to the square of their length (Smith, 1998). The GPR data essentially comprise recordings of the amplitudes of transient waves generated and recorded by source and receiver antennae, with each source/receiver pair generating a data trace that is a function of time. An ensemble of traces collected sequentially along a scan line, i.e. a GPR section or B-scan, provides a spatio-temporal sampling of the wavefield which contains different arrivals that correspond to different interactions with wave scatterers (inhomogeneities) in the subsurface. All these arrivals represent wavefronts that are relatively smooth in their longitudinal direction and oscillatory in their transverse direction. The connection between Harmonic Analysis and curvelets has resulted in important nonlinear approximations of functions with intermittent regularity (Candès and Donoho, 2004). Such functions are assumed to be piecewise smooth with singularities, i.e. regions where the derivative diverges. In the subsurface, these singularities correspond to geological inhomogeneities, at the boundaries of which waves reflect. In GPR data, these singularities correspond to wavefronts. Owing to their anisotropic shape, curvelets are well adapted to detect wavefronts at different angles and scales because aligned curvelets of a given scale locally correlate with wavefronts of the same scale.
The CT can also be viewed as a higher dimensional extension of the wavelet transform: whereas discrete wavelets are designed to provide sparse representations of functions with point singularities, curvelets are designed to provide sparse representations of functions with singularities on curves. This work investigates the utility of the CT in processing noisy GPR data from geotechnical and archaeometric surveys. The analysis has been performed with the Fast Discrete CT (FDCT - Candès et al., 2006) available from http://www.curvelet.org/ and adapted for use by the matGPR software (Tzanis, 2010). The adaptation comprises a set of driver functions that compute and display the curvelet decomposition of the input GPR section and then allow for the interactive exclusion/inclusion of data (wavefront) components at different scales and angles by cancelation/restoration of the corresponding curvelet coefficients. In this way it is possible to selectively reconstruct the data so as to abstract/retain information of given scales and orientations. It is demonstrated that the CT can be used to: (a) Enhance the S/N ratio by cancelling directional noise wavefronts of any angle of emergence, with particular reference to clutter. (b) Extract geometric information for further scrutiny, e.g. distinguish signals from small and large aperture fractures, faults, bedding etc. (c) Investigate the characteristics of signal propagation (hence material properties), albeit indirectly. This is possible because signal attenuation and temporal localization are closely associated, so that scale and spatio-temporal localization are also closely related. Thus, interfaces embedded in low attenuation domains will tend to produce sharp reflections rich in high frequencies and fine-scale localization. Conversely, interfaces in high attenuation domains will tend to produce dull reflections rich in low frequencies and broad localization. 
At a single scale and with respect to points (a) and (b) above, the results of the CT processor are comparable to those of the tuneable directional wavelet filtering scheme proposed by Tzanis (2013). With respect to point (c), the tuneable directional filtering appears to be more suitable in isolating and extracting information at the lower frequency - broader scale range. References Candès, E., 1999. Harmonic analysis of neural networks. Appl. Comput. Harmon. Anal., 6, 197-218. Candès, E. and Donoho, D., 2003a. Continuous curvelet transform: I. Resolution of the wavefront set. Appl. Comput. Harmon. Anal., 19, 162-197. Candès, E. and Donoho, D., 2003b. Continuous curvelet transform: II. Discretization and frames. Appl. Comput. Harmon. Anal., 19, 198-222. Candès, E. and Donoho, D., 2004. New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities. Comm. Pure Appl. Math., 57, 219-266. Candès, E. J., L. Demanet, D. L. Donoho, and L. Ying, 2006. Fast discrete curvelet transforms (FDCT). Multiscale Modeling and Simulation, 5, 861-899. Smith, H. F., 1998. A Hardy space for Fourier integral operators. Journal of Geometric Analysis, 7, 629 - 653. Tzanis, A., 2010. matGPR Release 2: A freeware MATLAB® package for the analysis & interpretation of common and single offset GPR data. FastTimes, 15 (1), 17 - 43. Tzanis, A, 2013. Detection and extraction of orientation-and-scale-dependent information from two-dimensional GPR data with tuneable directional wavelet filters. Journal of Applied Geophysics, 89, 48-67. DOI: 10.1016/j.jappgeo.2012.11.007
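As a crude stand-in for the angular selectivity described in point (a) above, cancellation of directional noise at a chosen orientation can be mimicked with a single Fourier-domain wedge filter. A real curvelet processor (e.g. the FDCT used by matGPR) adds the scale and spatial localization this sketch lacks, and the synthetic "section" below is invented for illustration:

```python
import numpy as np

def directional_filter(section, angle_deg, half_width_deg=15.0):
    """Suppress wavefronts whose wavenumber direction lies near angle_deg.

    Zeros the 2-D Fourier coefficients inside an angular wedge and inverts.
    The wedge is symmetric under k -> -k, so the output stays real."""
    F = np.fft.fft2(section)
    ky = np.fft.fftfreq(section.shape[0])[:, None]
    kx = np.fft.fftfreq(section.shape[1])[None, :]
    theta = np.degrees(np.arctan2(ky, kx)) % 180.0     # wavenumber direction
    target = angle_deg % 180.0
    d = np.minimum(np.abs(theta - target), 180.0 - np.abs(theta - target))
    F[d < half_width_deg] = 0.0
    return np.real(np.fft.ifft2(F))

# A dipping "reflector" plus horizontally invariant clutter; the clutter's
# wavenumbers point along the vertical (90 degree) direction, so a wedge
# filter at 90 degrees removes it while the dipping event survives:
y, x = np.mgrid[0:128, 0:128]
section = np.sin(0.3 * (y - 0.8 * x)) + np.sin(0.5 * y)
cleaned = directional_filter(section, angle_deg=90.0)
```

Unlike this single global wedge, curvelet coefficients index scale, angle and position simultaneously, which is what allows the selective reconstruction by scale and orientation described in the abstract.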
Topological resolution of gauge theory singularities
NASA Astrophysics Data System (ADS)
Saracco, Fabio; Tomasiello, Alessandro; Torroba, Gonzalo
2013-08-01
Some gauge theories with Coulomb branches exhibit singularities in perturbation theory, which are usually resolved by nonperturbative physics. In string theory this corresponds to the resolution of timelike singularities near the core of orientifold planes by effects from F or M theory. We propose a new mechanism for resolving Coulomb branch singularities in three-dimensional gauge theories, based on Chern-Simons interactions. This is illustrated in a supersymmetric SU(2) Yang-Mills-Chern-Simons theory. We calculate the one-loop corrections to the Coulomb branch of this theory and find a result that interpolates smoothly between the high-energy metric (that would exhibit the singularity) and a regular singularity-free low-energy result. We suggest possible applications to singularity resolution in string theory and speculate a relationship to a similar phenomenon for the orientifold six-plane in massive IIA supergravity.
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) was proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain spectral-spatial joint information. In the subsequent feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D data. Thus the DVCNN can not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced by the 3D convolution in the spectral-spatial fusion process, and the calculation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement achieved by DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
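The 3D-convolution step that turns a spectral-spatial patch into 1D spectral vectors can be sketched in NumPy. Patch and kernel sizes here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conv3d_valid(patch, kernel):
    """'Valid' 3D convolution (implemented as cross-correlation) of a
    spectral-spatial patch with a single 3D kernel."""
    pb, ph, pw = patch.shape      # bands, height, width
    kb, kh, kw = kernel.shape
    out = np.zeros((pb - kb + 1, ph - kh + 1, pw - kw + 1))
    for b in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[b, i, j] = np.sum(patch[b:b+kb, i:i+kh, j:j+kw] * kernel)
    return out

# Hypothetical sizes: a 200-band, 5x5-pixel HSI patch and one 7x5x5 kernel.
# A kernel whose spatial extent matches the patch collapses the output to 1D.
rng = np.random.default_rng(0)
patch = rng.standard_normal((200, 5, 5))
kernel = rng.standard_normal((7, 5, 5))
feat = conv3d_valid(patch, kernel)   # shape (194, 1, 1)
vec = feat.ravel()                   # the 1D spectral feature vector
```

Each learned kernel yields one such 1D vector; stacking the vectors from all kernels gives the 2D matrix that the remaining CNN layers process.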
7 CFR 46.1 - Words in singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Words in singular form. 46.1 Section 46.1 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Words in singular form. Words in this part in the singular form shall be deemed to import the plural...
7 CFR 61.1 - Words in singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Words in singular form. 61.1 Section 61.1 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Words in singular form. Words used in the regulations in this subpart in the singular form shall be...
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's $T^2$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
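The LP-SVD mapping can be sketched in a few lines of NumPy. This is a toy reconstruction under stated assumptions (LP order, test signal, and feature count are arbitrary; the paper's exact windowing and classifier are not reproduced): estimate LP coefficients by least squares, build the impulse response matrix of the all-pole LP filter, and project onto its leading left singular vectors.

```python
import numpy as np

def lp_coeffs(x, p):
    """Linear-prediction coefficients by least squares:
    x[n] ~ sum_k a[k] * x[n-1-k]."""
    A = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return a

def lp_svd_features(x, p, n_feat):
    a = lp_coeffs(x, p)
    # Impulse response of the all-pole synthesis filter 1 / (1 - sum a_k z^-k)
    N = len(x)
    h = np.zeros(N)
    h[0] = 1.0
    for n in range(1, N):
        h[n] = sum(a[k] * h[n - k - 1] for k in range(min(p, n)))
    # Lower-triangular Toeplitz matrix of shifted impulse responses
    H = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
                  for i in range(N)])
    U, s, Vt = np.linalg.svd(H)
    # Features: signal projected onto the leading left singular vectors
    return U[:, :n_feat].T @ x

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(64)) + 0.05 * rng.standard_normal(64)
feats = lp_svd_features(x, p=6, n_feat=8)
```

Because the transform is built from the signal's own LP model, the basis adapts to each EEG segment rather than being fixed in advance (as the DCT is).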
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is that the required derivatives are approximated by a finite difference technique on each local support domain Ωi. At each Ωi, we need to solve only a small linear system of algebraic equations with a conditionally positive definite interpolation matrix of order 1. This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied; it computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on a fourth-order-accurate Runge-Kutta formula is applied to approximate the time variable; this also decreases the computational cost at each time step, since no nonlinear system has to be solved. To compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also considered for the studied model. Our results demonstrate the ability of the present approach to solve the applicable model investigated in the current research work.
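The SVD-based condition-number check can be sketched directly: the 2-norm condition number of an interpolation matrix is the ratio of its largest to smallest singular value. The multiquadric kernel, node set, and shape parameter below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy local interpolation matrix on 6 nodes with a multiquadric RBF.
nodes = np.linspace(0.0, 1.0, 6)
eps = 3.0                                     # trial shape parameter
r = np.abs(nodes[:, None] - nodes[None, :])   # pairwise distances
A = np.sqrt(1.0 + (eps * r) ** 2)             # multiquadric RBF matrix

# Condition number from the extreme singular values.
s = np.linalg.svd(A, compute_uv=False)        # singular values, descending
cond = s[0] / s[-1]
```

A shape-parameter search of the kind Sarra (2012) describes would adjust `eps` until `cond` falls inside a prescribed window, balancing accuracy (large condition number) against numerical stability (small condition number).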
The Friedmann-Lemaître-Robertson-Walker Big Bang Singularities are Well Behaved
NASA Astrophysics Data System (ADS)
Stoica, Ovidiu Cristinel
2016-01-01
We show that the Big Bang singularity of the Friedmann-Lemaître-Robertson-Walker model does not raise major problems to General Relativity. We prove a theorem showing that the Einstein equation can be written in a non-singular form, which allows the extension of the spacetime before the Big Bang. The physical interpretation of the fields used is discussed. These results follow from our research on singular semi-Riemannian geometry and singular General Relativity.
Analytic model of a multi-electron atom
NASA Astrophysics Data System (ADS)
Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.
2017-12-01
A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single effective-charge parameter for all electrons of a given atom. The completeness of the basis allows us to employ the second-quantized representation for the construction of a regular perturbation theory which includes correlation effects in a natural way, converges fast, and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set makes it possible to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function into a convolution of single-electron Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system and provides an accuracy that is independent of the number of electrons, which is important for applications where the Thomas-Fermi model is still utilized. In addition, already in second-order perturbation theory our results become comparable with those of a multi-configuration Hartree-Fock approach.
Fast animation of lightning using an adaptive mesh.
Kim, Theodore; Lin, Ming C
2007-01-01
We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.
Application of two direct runoff prediction methods in Puerto Rico
Sepulveda, N.
1997-01-01
Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs compared to measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the resulting hydraulic routing flow equation from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak although the much faster unit hydrograph method also yields reasonable results.
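The first method's convolution step can be sketched in plain Python: hypothetical unit-hydrograph ordinates and excess-rainfall depths (the numbers below are illustrative, not from the study) are convolved to give the direct-runoff hydrograph.

```python
def direct_runoff(unit_hydrograph, excess_rain):
    """Discrete convolution: Q[n] = sum_k P[k] * U[n - k]."""
    n_out = len(unit_hydrograph) + len(excess_rain) - 1
    q = [0.0] * n_out
    for k, p in enumerate(excess_rain):
        for m, u in enumerate(unit_hydrograph):
            q[k + m] += p * u
    return q

# Hypothetical 1-hour unit hydrograph ordinates (m^3/s per cm of excess rain)
uh = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0]
rain = [0.5, 1.2, 0.3]          # cm of excess rainfall in successive hours
q = direct_runoff(uh, rain)     # direct-runoff hydrograph, length 8
```

Each hour of excess rainfall launches a scaled, time-shifted copy of the unit hydrograph; the predicted hydrograph is their sum, which is exactly the discrete convolution.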
Treatment of singularities in a middle-crack tension specimen
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1990-01-01
A three-dimensional finite-element analysis of a middle-crack tension specimen subjected to mode I loading was performed to study the stress singularity along the crack front. The specimen was modeled using 20-node isoparametric elements with collapsed nonsingular elements at the crack front. The displacements and stresses from the analysis were used to estimate the power of the singularities along the crack front by a log-log regression analysis. The analyses showed that finite-sized cracked bodies have two singular stress fields. Because of the two singular stress fields near the free surface and the classical square-root singularity elsewhere, the strain energy release rate appears to be an appropriate parameter all along the crack front.
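The log-log regression step can be illustrated on synthetic near-tip data: for a field sigma(r) = C * r^lambda, the power of the singularity is the slope of log(sigma) against log(r). The values below are synthetic, not from the reported analysis.

```python
import numpy as np

# Synthetic near-tip field with a square-root singularity (lambda = -0.5).
r = np.logspace(-4, -1, 50)          # radial distances from the crack front
sigma = 3.0 * r ** -0.5              # sigma(r) = C * r^lambda with C = 3

# Power of the singularity = slope of the log-log fit.
lam, log_C = np.polyfit(np.log(r), np.log(sigma), 1)
```

On finite-element output the fitted slope would deviate from the ideal -1/2 near the free surface, which is how the analysis detects the second (corner) singular field.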
Semiclassical analysis of spectral singularities and their applications in optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostafazadeh, Ali
2011-08-15
Motivated by possible applications of spectral singularities in optics, we develop a semiclassical method of computing spectral singularities. We use this method to examine the spectral singularities of a planar slab gain medium whose gain coefficient varies due to the exponential decay of the intensity of the pumping beam inside the medium. For both singly and doubly pumped samples, we obtain universal upper bounds on the decay constant beyond which no lasing occurs. Furthermore, we show that the dependence of the wavelength of the spectral singularities on the value of the decay constant is extremely mild. This is an indication of the stability of optical spectral singularities.
Cusp singularities in f(R) gravity: pros and cons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pisin; Yeom, Dong-han
We investigate cusp singularities in f(R) gravity, especially for the Starobinsky and Hu-Sawicki dark energy models. Using double-null numerical simulations, we illustrate that a cusp singularity can be triggered by gravitational collapse. This singularity can be cured by adding a quadratic term, but this causes a Ricci scalar bump that can be observed by an observer outside the event horizon. Comparison with cosmological parameters suggests that it would be difficult to see super-Planckian effects in astrophysical experiments. On the other hand, once a cusp singularity exists, it can provide a mechanism to realize a horizon-scale curvature singularity that can be interpreted as a firewall.
Propagation of the Lissajous singularity dipole emergent from non-paraxial polychromatic beams
NASA Astrophysics Data System (ADS)
Haitao, Chen; Gao, Zenghui; Wang, Wanqing
2017-06-01
The propagation of the Lissajous singularity dipole (LSD) emergent from non-paraxial polychromatic beams is studied. It is found that the handedness reversal of Lissajous singularities, the change in the shape of Lissajous figures, and the creation and annihilation of the LSD may take place on varying the propagation distance, off-axis parameter, wavelength, or amplitude factor. Compared with the LSD emergent from paraxial polychromatic beams, the output field of non-paraxial polychromatic beams is more complicated, which results in richer dynamic behaviors of the Lissajous singularities, such as more Lissajous singularities and no vanishing of a single Lissajous singularity in the plane z>0.
Entangled singularity patterns of photons in Ince-Gauss modes
NASA Astrophysics Data System (ADS)
Krenn, Mario; Fickler, Robert; Huber, Marcus; Lapkiewicz, Radek; Plick, William; Ramelow, Sven; Zeilinger, Anton
2013-01-01
Photons with complex spatial mode structures open up possibilities for new fundamental high-dimensional quantum experiments and for novel quantum information tasks. Here we show entanglement of photons with complex vortex and singularity patterns called Ince-Gauss modes. In these modes, the position and number of singularities vary depending on the mode parameters. We verify two-dimensional and three-dimensional entanglement of Ince-Gauss modes. By measuring one photon and thereby defining its singularity pattern, we nonlocally steer the singularity structure of its entangled partner, while the initial singularity structure of the photons is undefined. In addition we measure an Ince-Gauss specific quantum-correlation function with possible use in future quantum communication protocols.
Classical and quantum Big Brake cosmology for scalar field and tachyonic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamenshchik, A. Yu.; Manti, S.
We study the relation between cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity: a model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of classical-quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not traversable.
Quantum healing of spacetime singularities: A review
NASA Astrophysics Data System (ADS)
Konkowski, D. A.; Helliwell, T. M.
2018-02-01
Singularities are commonplace in general relativistic spacetimes. It is natural to hope that they might be “healed” (or resolved) by the inclusion of quantum mechanics, either in the theory itself (quantum gravity) or, more modestly, in the description of the spacetime geodesic paths used to define them. We focus here on the latter, mainly using a procedure proposed by Horowitz and Marolf to test whether singularities in broad classes of spacetimes can be resolved by replacing geodesic paths with quantum wave packets. We list the spacetime singularities that various authors have studied in this context, and distinguish those which are healed quantum mechanically (QM) from those which remain singular. Finally, we mention some alternative approaches to healing singularities.
Singularities in water waves and Rayleigh-Taylor instability
NASA Technical Reports Server (NTRS)
Tanveer, S.
1991-01-01
Singularities in inviscid two-dimensional finite-amplitude water waves and inviscid Rayleigh-Taylor instability are discussed. For the deep water gravity waves of permanent form, through a combination of analytical and numerical methods, results describing the precise form, number, and location of singularities in the unphysical domain as the wave height is increased are presented. It is shown how the information on the singularity in the unphysical region has the same form as for deep water waves. However, associated with such a singularity is a series of image singularities at increasing distances from the physical plane with possibly different behavior. Furthermore, for the Rayleigh-Taylor problem of motion of fluid over a vacuum and for the unsteady water wave problem, integro-differential equations valid in the unphysical region are derived, and how these equations can give information on the nature of singularities for arbitrary initial conditions is shown.
Genericity Distinctions and the Interpretation of Determiners in Second Language Acquisition
ERIC Educational Resources Information Center
Ionin, Tania; Montrul, Silvina; Kim, Ji-Hye; Philippov, Vadim
2011-01-01
English uses three types of generic NPs: bare plurals ("Lions are dangerous"), definite singulars ("The lion is dangerous"), and indefinite singulars ("A lion is dangerous"). These three NP types are not interchangeable: definite singulars and bare plurals can have generic reference at the NP-level, while indefinite singulars are compatible only…
7 CFR 900.36 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Words in the singular form. 900.36 Section 900.36 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... Marketing Orders § 900.36 Words in the singular form. Words in this subpart in the singular form shall be...
7 CFR 900.100 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Words in the singular form. 900.100 Section 900.100 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... Words in the singular form. Words in this subpart in the singular form shall be deemed to import the...
7 CFR 900.1 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Words in the singular form. 900.1 Section 900.1 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... Words in the singular form. Words in this subpart in the singular form shall be deemed to import the...
7 CFR 900.50 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Words in the singular form. 900.50 Section 900.50 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... Words in the singular form. Words in this subpart in the singular form shall be deemed to import the...
7 CFR 900.20 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Words in the singular form. 900.20 Section 900.20 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... § 900.20 Words in the singular form. Words in this subpart in the singular form shall be deemed to...
7 CFR 1200.50 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Words in the singular form. 1200.50 Section 1200.50 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING....50 Words in the singular form. Words in this subpart in the singular form shall be deemed to import...
Singularities in the classical Rayleigh-Taylor flow - Formation and subsequent motion
NASA Technical Reports Server (NTRS)
Tanveer, S.
1993-01-01
The creation and subsequent motion of singularities of solution to classical Rayleigh-Taylor flow (two dimensional inviscid, incompressible fluid over a vacuum) are discussed. For a specific set of initial conditions, we give analytical evidence to suggest the instantaneous formation of one or more singularities at specific points in the unphysical plane, whose locations depend sensitively on small changes in initial conditions in the physical domain. One-half power singularities are created in accordance with an earlier conjecture; however, depending on initial conditions, other forms of singularities are also possible. For a specific initial condition, we follow a numerical procedure in the unphysical plane to compute the motion of a one-half singularity. This computation confirms our previous conjecture that the approach of a one-half singularity towards the physical domain corresponds to the development of a spike at the physical interface. Under some assumptions that appear to be consistent with numerical calculations, we present analytical evidence to suggest that a singularity of the one-half type cannot impinge the physical domain in finite time.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
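As a concrete illustration of such a code, here is a minimal rate one-half, constraint-length-five convolutional encoder in Python. The generator taps (23, 35 in octal) are a commonly quoted optimal pair for K = 5; whether they match the exact code implemented in the MCD is an assumption.

```python
def conv_encode(bits, g1=0b10011, g2=0b11101, K=5):
    """Rate-1/2 convolutional encoder: each input bit shifts into a K-bit
    register, and two output bits are the parities of the taps selected
    by generators g1 and g2 (23 and 35 octal, an assumed K=5 pair)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

msg = [1, 0, 1, 1, 0]
code = conv_encode(msg)        # two output bits per input bit
```

A maximum-likelihood (Viterbi) decoder for this code searches the 2^(K-1) = 16-state trellis implied by `state`, which is what makes constraint length five attractive for an LSI implementation.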
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.
1980-01-01
A fast algorithm is developed to compute two-dimensional convolutions of an array of d1 × d2 complex number points, where d2 = 2^m and d1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
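The baseline this algorithm improves on, the FFT route to 2D cyclic convolution, can be sketched in NumPy and checked against a direct evaluation. The 2^3 × 2^2 array sizes below are chosen only for the check:

```python
import numpy as np

def cyclic_conv2d_fft(x, h):
    """2D circular convolution via the 2D convolution theorem:
    DFT(x * h) = DFT(x) . DFT(h) elementwise."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))

def cyclic_conv2d_direct(x, h):
    """Direct O((d1*d2)^2) circular convolution for verification."""
    d1, d2 = x.shape
    y = np.zeros((d1, d2))
    for i in range(d1):
        for j in range(d2):
            for k in range(d1):
                for l in range(d2):
                    y[i, j] += x[k, l] * h[(i - k) % d1, (j - l) % d2]
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4))     # d1 = 2^3, d2 = 2^2
h = rng.standard_normal((8, 4))
y_fft = cyclic_conv2d_fft(x, h)
y_dir = cyclic_conv2d_direct(x, h)
```

The row-column FFT approach requires transposing the data matrix between the two 1D transform passes; avoiding that transposition is the practical advantage the abstract highlights.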
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Currently, considerable effort has been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightened feature learner is presented to solve these problems, with application to face recognition, which shares a similar topological architecture with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and labeled faces in the wild datasets among the comparative methods.
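The filter-bank learning layer can be sketched with a plain K-means in NumPy. Patch size, filter count, and the random test image are illustrative assumptions; the paper's preprocessing may differ.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means on row vectors; the centroids double as filters."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared dists
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return C

# Hypothetical setup: learn 8 filters of size 5x5 from patches of one image.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
patches = np.array([img[i:i+5, j:j+5].ravel()
                    for i in range(0, 28, 2) for j in range(0, 28, 2)])
patches -= patches.mean(1, keepdims=True)     # per-patch mean removal
filters = kmeans(patches, k=8).reshape(8, 5, 5)
```

Convolving an input image with each centroid filter, then applying tanh and spatial pyramid pooling, reproduces the three-layer pipeline the abstract describes without any gradient-based training.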
A convolutional neural network to filter artifacts in spectroscopic MRI.
Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D
2018-03-09
Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.
Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...
2010-08-27
Diffusion, lossy wave, and Klein-Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost of the spatio-temporal convolutions associated with evaluating the potentials scales as O(N_s^2 N_t^2), where N_s and N_t are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(N_s N_t log^2 N_t). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
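The first scheme's core idea, trading an O(N_t^2) temporal convolution against a long-tailed kernel for FFTs, can be sketched independently of the ACE machinery. The kernel below is a generic slowly decaying tail, not one of the paper's Green's functions:

```python
import numpy as np

def temporal_conv_fft(g, q):
    """Causal temporal convolution u[n] = sum_{m<=n} g[n-m] q[m],
    evaluated for all n at once with zero-padded FFTs. The history
    matrix is Toeplitz, so this costs O(Nt log Nt) instead of O(Nt^2)."""
    Nt = len(q)
    L = 2 * Nt                      # pad so circular conv equals linear conv
    U = np.fft.ifft(np.fft.fft(g, L) * np.fft.fft(q, L))
    return np.real(U[:Nt])

Nt = 64
n = np.arange(Nt)
g = 1.0 / np.sqrt(n + 1.0)          # slowly decaying "infinite tail" kernel
q = np.sin(0.2 * n)                 # source history
u_fast = temporal_conv_fft(g, q)
u_ref = np.array([sum(g[k - m] * q[m] for m in range(k + 1))
                  for k in range(Nt)])
```

In the paper this trick is applied per ACE harmonic, and blocking the time axis recovers the quoted O(N_s N_t log^2 N_t) overall cost.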
λ elements for one-dimensional singular problems with known strength of singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a new and general procedure for designing special elements, called λ elements, for one-dimensional singular problems where the strength of the singularity is known. The λ elements presented here are of type C^0. These elements also provide inter-element C^0 continuity with p-version elements. The λ elements do not require precise knowledge of the extent of the singular zone, i.e., their use may be extended beyond the singular zone. When λ elements are used at the singularity, a singular problem behaves like a smooth problem, thereby eliminating the need for h- and p-adaptive processes altogether. One-dimensional steady-state radial flow of an upper convected Maxwell fluid is considered as a sample problem. A least-squares approach (least-squares finite element formulation: LSFEF) is used to construct the integral form (error functional I) from the differential equations. Numerical results presented for radially inward flow with inner radius r_i = 0.1, 0.01, 0.001, 0.0001, 0.00001, and a Deborah number of 2 (De = 2) demonstrate the accuracy, the faster convergence of the iterative solution procedure, the faster convergence rate of the error functional, and the mesh-independent characteristics of the λ elements regardless of the severity of the singularity.
Tangled nonlinear driven chain reactions of all optical singularities
NASA Astrophysics Data System (ADS)
Vasil'ev, V. I.; Soskin, M. S.
2012-03-01
The dynamics of chain reactions of polarization optical singularities in generic elliptically polarized speckle fields created in a photorefractive LiNbO3 crystal was investigated in detail. The induced speckle field develops on a scale of tens of minutes due to the photorefractive 'optical damage' effect induced by the incident He-Ne laser beam. It is shown that polarization singularities develop through topological chain reactions of the evolving speckle fields, driven by photorefractive nonlinearities induced by the incident laser beam. All optical singularities (C points, optical vortices, optical diabolos) are defined by the instantaneous topological structure of the output wavefront and are tangled by the laws of singular optics. They therefore develop in a tangled way through six topological chain reactions driven by nonlinear processes in the medium used (photorefractive LiNbO3:Fe in our case): C points and optical diabolos for right (left) polarized component domains, with orthogonally left (right) polarized optical vortices underlying them. All chain reactions consist of loop and chain links in which nucleated singularities annihilate either directly or with alien singularities, in a 1:9 ratio. The topological reason for this statistic was established: the low probability that a nucleated pair of singularities separates far enough from existing neighbor singularities during loop trajectories. The topology of the developing speckle field was measured and analyzed by dynamic Stokes polarimetry with a resolution of a few seconds. The hierarchy of singularities governing the scenario of the tangled chain reactions was determined. Useful space-time data about the peculiarities of optical damage evolution were obtained from the existence and parameters of 'islands of stability' in the developing speckle fields.
Metric dimensional reduction at singularities with implications to Quantum Gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com
2014-08-15
A series of old and recent theoretical observations suggests that the quantization of gravity would be feasible, and some problems of Quantum Field Theory would go away if, somehow, the spacetime would undergo a dimensional reduction at high energy scales. But an identification of the deep mechanism causing this dimensional reduction would still be desirable. The main contribution of this article is to show that dimensional reduction effects are due to General Relativity at singularities, and do not need to be postulated ad-hoc. Recent advances in understanding the geometry of singularities do not require modification of General Relativity, being just non-singular extensions of its mathematics to the limit cases. They turn out to work fine for some known types of cosmological singularities (black holes and FLRW Big-Bang), allowing a choice of the fundamental geometric invariants and physical quantities which remain regular. The resulting equations are equivalent to the standard ones outside the singularities. One consequence of this mathematical approach to the singularities in General Relativity is a special, (geo)metric type of dimensional reduction: at singularities, the metric tensor becomes degenerate in certain spacetime directions, and some properties of the fields become independent of those directions. Effectively, it is like one or more dimensions of spacetime just vanish at singularities. This suggests that it is worth exploring the possibility that the geometry of singularities leads naturally to the spontaneous dimensional reduction needed by Quantum Gravity. - Highlights: • The singularities we introduce are described by finite geometric/physical objects. • Our singularities are accompanied by dimensional reduction effects. • They affect the metric, the measure, the topology, the gravitational DOF (Weyl = 0). • Effects proposed in other approaches to Quantum Gravity are obtained naturally. • The geometric dimensional reduction obtained opens new ways for Quantum Gravity.
Enhanced line integral convolution with flow feature detection
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture imag...
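The basic LIC idea the abstract describes can be sketched in a few lines of pure Python. This is a deliberately minimal illustration under simplifying assumptions (Euler streamline tracing, periodic boundaries, a box filter, the seed pixel sampled in both trace directions); a production LIC would use higher-order integration and a proper convolution kernel.

```python
def lic(vx, vy, noise, half_len=5, step=0.5):
    """Minimal Line Integral Convolution: for each pixel, average a noise
    texture along the streamline of the vector field (vx, vy) through that
    pixel.  All inputs are 2D lists of the same shape."""
    rows, cols = len(noise), len(noise[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):       # trace forward and backward
                x, y = float(c), float(r)
                for _ in range(half_len):
                    i = int(round(y)) % rows    # periodic boundaries
                    j = int(round(x)) % cols
                    total += noise[i][j]
                    count += 1
                    x += direction * step * vx[i][j]   # Euler step along field
                    y += direction * step * vy[i][j]
            out[r][c] = total / count
    return out
```

Because each output pixel is an average of noise samples along a streamline, the texture becomes strongly correlated along the flow and stays uncorrelated across it, which is what makes the overall flow pattern visible.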
Spectral singularities and Bragg scattering in complex crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longhi, S.
2010-02-15
Spectral singularities that spoil the completeness of Bloch-Floquet states may occur in non-Hermitian Hamiltonians with complex periodic potentials. Here an equivalence is established between spectral singularities in complex crystals and secularities that arise in Bragg diffraction patterns. Signatures of spectral singularities in a scattering process with wave packets are elucidated for a PT-symmetric complex crystal.
On the splash and splat singularities for the one-phase inhomogeneous Muskat Problem
NASA Astrophysics Data System (ADS)
Córdoba, Diego; Pernas-Castaño, Tania
2017-10-01
In this paper, we study the formation of finite-time splash and splat singularities for the interface of one fluid in a porous medium with two different permeabilities. We prove that the smoothness of the interface breaks down in finite time into a splash singularity, but that a splat singularity cannot occur.
Classical stability of sudden and big rip singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrow, John D.; Lip, Sean Z. W.
2009-08-15
We introduce a general characterization of sudden cosmological singularities and investigate the classical stability of homogeneous and isotropic cosmological solutions of all curvatures containing these singularities to small scalar, vector, and tensor perturbations using gauge-invariant perturbation theory. We establish that sudden singularities at which the scale factor, expansion rate, and density are finite are stable except for a set of special parameter values. We also apply our analysis to the stability of Big Rip singularities and find the conditions for their stability against small scalar, vector, and tensor perturbations.
Singularity embedding method in potential flow calculations
NASA Technical Reports Server (NTRS)
Jou, W. H.; Huynh, H.
1982-01-01
The so-called H-type mesh is used in a finite-element (or finite-volume) calculation of the potential flow past an airfoil. Due to the coordinate singularity at the leading edge, a special singular trial function is used for the elements neighboring the leading edge. The results using the special singular elements are compared to those using the regular elements. It is found that the unreasonable pressure distribution obtained with the latter is removed by embedding the singular element. Suggestions for extending the present method to transonic cases are given.
Naked singularities are not singular in distorted gravity
NASA Astrophysics Data System (ADS)
Garattini, Remo; Majumder, Barun
2014-07-01
We compute the Zero Point Energy (ZPE) induced by a naked singularity with the help of a reformulation of the Wheeler-DeWitt equation. A variational approach is used for the calculation with Gaussian trial wave functionals. The one-loop contribution of the graviton to the ZPE is extracted, keeping the ultraviolet divergences under control by means of a distorted gravitational field. Two examples of distortion are taken under consideration: Gravity's Rainbow and Noncommutative Geometry. Surprisingly, we find that the ZPE is no longer singular when we approach the singularity.
Null cosmological singularities and free strings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, K.
2010-03-15
We continue exploring free strings in the background of null Kasner-like cosmological singularities, following K. Narayan, arXiv:0904.4532. We study the free string Schrödinger wave functional along the lines of K. Narayan, arXiv:0807.1517. We find the wave functional to be nonsingular in the vicinity of singularities whose Kasner exponents satisfy certain relations. We compare this with the description in other variables. We then study certain regulated versions of these singularities where the singular region is replaced by a substringy but nonsingular region and study the string spectra in these backgrounds. The string modes can again be solved for exactly, giving some insight into how string oscillator states get excited near the singularity.
Probing the degenerate states of V-point singularities.
Ram, B S Bhargava; Sharma, Anurag; Senthilkumaran, Paramasivam
2017-09-15
V-points are polarization singularities in spatially varying linearly polarized optical fields and are characterized by the Poincare-Hopf index η. Each V-point singularity is a superposition of two oppositely signed orbital angular momentum states in two orthogonal spin angular momentum states. Hence, a V-point singularity has zero net angular momentum. V-points with given |η| have the same (amplitude) intensity distribution but have four degenerate polarization distributions. Each of these four degenerate states also produce identical diffraction patterns. Hence to distinguish these degenerate states experimentally, we present in this Letter a method involving a combination of polarization transformation and diffraction. This method also shows the possibility of using polarization singularities in place of phase singularities in optical communication and quantum information processing.
NASA Astrophysics Data System (ADS)
Liu, Pusheng; Lü, Baida
2007-04-01
By using the vectorial Debye diffraction theory, phase singularities of high numerical aperture (NA) dark-hollow Gaussian beams in the focal region are studied. The dependence of phase singularities on the truncation parameter δ and semi-aperture angle α (or equivalently, NA) is illustrated numerically. A comparison of phase singularities of high-NA dark-hollow Gaussian beams with those of scalar paraxial Gaussian beams and high-NA Gaussian beams is made. For high-NA dark-hollow Gaussian beams the beam order n additionally affects the spatial distribution of phase singularities, and there exist phase singularities outside the focal plane, which may be created or annihilated by variation of the semi-aperture angle in a certain region.
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
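As a hedged illustration of the workflow the abstract describes, a minimal Singularity definition file might look like the following. The base image, package, and file names are arbitrary examples chosen here, not taken from the paper:

```
Bootstrap: docker
From: ubuntu:22.04

%post
    # commands run once at build time inside the container
    apt-get update && apt-get install -y python3

%runscript
    # command executed by "singularity run"
    exec python3 "$@"
```

Such a recipe would typically be built into a portable image with `singularity build analysis.sif analysis.def` (build generally requires elevated privileges or a fakeroot setup) and executed elsewhere with `singularity run analysis.sif script.py`, which is the mobility-of-compute pattern the paper emphasizes.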
Managing focal fields of vector beams with multiple polarization singularities.
Han, Lei; Liu, Sheng; Li, Peng; Zhang, Yi; Cheng, Huachao; Gan, Xuetao; Zhao, Jianlin
2016-11-10
We explore the tight focusing behavior of vector beams with multiple polarization singularities, and analyze the influences of the number, position, and topological charge of the singularities on the focal fields. It is found that the ellipticity of the local polarization states at the focal plane could be determined by the spatial distribution of the polarization singularities of the vector beam. When the spatial location and topological charge of singularities have even-fold rotation symmetry, the transverse fields at the focal plane are locally linearly polarized. Otherwise, the polarization state becomes a locally hybrid one. By appropriately arranging the distribution of the polarization singularities in the vector beam, the polarization distributions of the focal fields could be altered while the intensity maintains unchanged.
Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2012-01-01
Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers, 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. The two approaches share the neural inspiration, but each solves the problem in a different way. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus, memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, the hardware should be modular, reconfigurable, and expandable. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with a discussion of their differences, pros, and cons. PMID:22518097
The decoding of majority-multiplexed signals by means of dyadic convolution
NASA Astrophysics Data System (ADS)
Losev, V. V.
1980-09-01
The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform which can be used to reduce the number of computations.
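The dyadic convolution the paper refers to, c[k] = Σ_i a[i]·b[i⊕k] with ⊕ the bitwise XOR, is diagonalized by the Walsh-Hadamard transform, so a fast Walsh-Hadamard transform (FWHT) reduces the direct O(N²) sum to O(N log N). A small sketch of this idea (illustrative, not the paper's decoder):

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a power-of-two-length sequence,
    using the butterfly (x, y) -> (x + y, x - y); returns a new list."""
    a = a[:]
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def dyadic_convolution(a, b):
    """c[k] = sum_i a[i] * b[i XOR k] in O(N log N): transform both inputs,
    multiply pointwise, transform back (the WHT is its own inverse up to 1/N)."""
    n = len(a)
    A, B = fwht(a), fwht(b)
    c = fwht([x * y for x, y in zip(A, B)])
    return [v / n for v in c]
```

This mirrors the familiar FFT convolution theorem, with the XOR (dyadic) group replacing cyclic shifts, which is why the Walsh-Hadamard transform plays the role of the Fourier transform here.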
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
2014-01-01
This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.
[Application of numerical convolution in in vivo/in vitro correlation research].
Yue, Peng
2009-01-01
This paper introduces the concept and principles of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, and elucidates in detail a convolution strategy and method for calculating the in vivo absorption performance of a pharmaceutical product from its pharmacokinetic data in Excel, then applies the results to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Second, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of this method is demonstrated in detail, and its simplicity and effectiveness are shown by comparison with the compartment model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
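The convolution step at the heart of this IVIVC approach can be sketched generically: the predicted plasma profile is the discrete convolution of the in vivo input (absorption) rate with the unit impulse response. This is a textbook illustration with invented names and toy numbers, not the paper's Excel worksheet:

```python
def convolve_input_with_uir(input_rate, uir, dt):
    """Predicted plasma profile C[n] = dt * sum_m input_rate[m] * uir[n - m]:
    discrete convolution of the in vivo absorption rate (e.g. derived from a
    fitted Weibull input function) with the unit impulse response (e.g.
    obtained from an IV bolus or oral solution)."""
    n_out = len(input_rate) + len(uir) - 1
    c = [0.0] * n_out
    for m, r in enumerate(input_rate):
        for k, u in enumerate(uir):
            c[m + k] += r * u * dt
    return c
```

In an IVIVC workflow one would adjust the parameters of the input function until this predicted profile matches the observed concentrations, then correlate the fitted in vivo input with the in vitro dissolution curve.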
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.
Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.
Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol
2018-01-01
The features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital and parietal lobes, whereas illiterate subjects only show correlation between neural activities in the frontal and central lobes. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We find that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require knowing beforehand the local information for corrupted pixels, we propose a fully convolutional network of depth 20 that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because there often exist images with large corruptions, or inpainting tasks on low-resolution images, on which existing approaches perform poorly, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep neural network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and that it works well when realizing super-resolution and image inpainting simultaneously.
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.
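For readers unfamiliar with convolutional encoding, a generic encoder is easy to sketch. The following is a textbook rate-1/2 encoder with constraint length K = 3 and generators 7 and 5 (octal), chosen purely for illustration; it is not the self-dual quadratic-residue construction of the abstract, whose constraint lengths (e.g. K = 8 for the QR (48, 24; 12) code) are larger:

```python
def convolutional_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: each input bit is combined (mod 2)
    with the previous K-1 input bits through two generator tap sets, and
    the two output bits per input bit are interleaved into one stream."""
    k = len(g1)
    state = [0] * (k - 1)                  # shift register of past inputs
    out = []
    for b in bits:
        window = [b] + state               # current bit plus K-1 past bits
        out.append(sum(t * w for t, w in zip(g1, window)) % 2)
        out.append(sum(t * w for t, w in zip(g2, window)) % 2)
        state = window[:-1]                # shift the register
    return out
```

The impulse response of the encoder is just the interleaved generator sequences, which is the standard way to check such an implementation.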
Overcoming Robot-Arm Joint Singularities
NASA Technical Reports Server (NTRS)
Barker, L. K.; Houck, J. A.
1986-01-01
Kinematic equations allow the arm to pass smoothly through the singular region. The report discusses mathematical singularities in the equations of robot-arm control. The operator commands the robot arm to move in a direction relative to its own axis system by specifying a velocity in that direction. The velocity command is then resolved into individual-joint rotational velocities in the robot arm to effect the motion. However, the usual resolved-rate equations become singular when the robot arm is straightened.
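The singularity described here is easy to exhibit for a planar two-link arm, where det J = l1·l2·sin(θ2) vanishes when the arm straightens (θ2 = 0). A common remedy, sketched below under these illustrative assumptions (this is the standard damped least-squares technique, not necessarily the report's formulation), keeps the resolved-rate solution bounded near the singularity:

```python
import math

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """Jacobian of the tip position of a planar two-link arm."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def resolved_rate(J, v, damping=0.0):
    """Joint rates for a commanded tip velocity v = (vx, vy).  With
    damping > 0 this is the damped least-squares solution of
    (J^T J + damping^2 I) qdot = J^T v, which stays bounded even when
    det J -> 0 (arm straightened)."""
    (a, b), (c, d) = J
    m11 = a * a + c * c + damping ** 2
    m12 = a * b + c * d
    m22 = b * b + d * d + damping ** 2
    r1 = a * v[0] + c * v[1]
    r2 = b * v[0] + d * v[1]
    det = m11 * m22 - m12 * m12
    return [(m22 * r1 - m12 * r2) / det, (m11 * r2 - m12 * r1) / det]
```

Away from the singularity the damped solution approaches the exact inverse-Jacobian rates; at the singularity it trades a small tracking error for bounded joint velocities.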
7 CFR 900.80 - Words in the singular form.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 Agriculture, Part 900, under 7 U.S.C. 608b(b) and 7 U.S.C. 608e, covering fruits, vegetables, and nuts. § 900.80 Words in the singular form. Words in this subpart in the singular form shall be deemed to import the plural, and vice versa...
NASA Technical Reports Server (NTRS)
Wang, S. S.; Choi, I.
1983-01-01
The fundamental mechanics of delamination in fiber composite laminates is studied. Mathematical formulation of the problem is based on laminate anisotropic elasticity theory and interlaminar fracture mechanics concepts. Stress singularities and complete solution structures associated with general composite delaminations are determined. For a fully open delamination with traction-free surfaces, oscillatory stress singularities always appear, leading to physically inadmissible field solutions. A refined model is introduced by considering a partially closed delamination with crack surfaces in finite-length contact. Stress singularities associated with a partially closed delamination having frictional crack-surface contact are determined, and are found to be different from the inverse square-root one of the frictionless-contact case. In the case of a delamination with a very small area of crack closure, a simplified model having a square-root stress singularity is employed by taking the limit of the partially closed delamination. The possible presence of logarithmic-type stress singularity is examined; no logarithmic singularity of any kind is found in the composite delamination problem. Numerical examples of dominant stress singularities are shown for delaminations having crack-tip closure with different frictional coefficients between general (1) and (2) graphite-epoxy composites.
Singular trajectories: space-time domain topology of developing speckle fields
NASA Astrophysics Data System (ADS)
Vasil'ev, Vasiliy; Soskin, Marat S.
2010-02-01
It is shown that the space-time dynamics of optical singularities is fully described by the singularities' trajectories in the space-time domain, i.e., by the evolution of their transverse coordinates (x, y) in some fixed plane z0. The dynamics of generic developing speckle fields was realized experimentally by laser-induced scattering in a LiNbO3:Fe photorefractive crystal. The space-time trajectories of singularities can be divided topologically into two classes with essentially different scenarios and durations. Some of them (direct topological reactions) consist of the nucleation of a pair of singularities at some point (x, y, z0, t), their movement, and their annihilation. They possess the form of closed loops with a relatively short time of existence. Another, much more probable class of trajectories is that of chain topological reactions. Each of them consists of a sequence of links, i.e., of singularity nucleations at various points (xi, yi, ti) followed by the annihilation of each singularity at other space-time points with alien singularities of opposite topological indices. Their topology and properties are established. Chain topological reactions can stop on the borders of a developing speckle field or go to infinity. Examples of both types of measured topological reactions for optical vortices (polarization C points) in scalar (elliptically polarized) natural developing speckle fields are presented.
New classification methods on singularity of mechanism
NASA Astrophysics Data System (ADS)
Luo, Jianguo; Han, Jianyou
2010-07-01
Based on an analysis of the existing bases and methods for classifying mechanism singularity, four methods are identified, according to the moving state of the mechanism, the cause of the singularity, the linear-complex property of the singularity, and the approach used in studying singularity. These bases and methods cannot reflect the direct, systematic, and controllable properties of the mechanism's structure at the macro level, and thus cannot effectively guide the evasion of a configuration before a singularity appears. In view of these shortcomings, six new classification methods are proposed that connect directly and closely with the structure, external phenomena, and motion control of the mechanism; classification is carried out according to the moving base, joint components, executor, branch, actuating source, and input parameters. These factors display systemic properties at the macro level, so better guidance can be expected for singularity evasion, machine design, and machine control based on the new bases and methods.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth Sarkis
1987-01-01
The correspondence between robotic manipulators and single-gimbal control moment gyro (CMG) systems was exploited to aid in the understanding and design of single-gimbal CMG steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and inescapable singular states. A detailed analysis of the Jacobian matrix null space was performed, and the results were used to develop and test a variety of single-gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid elliptic-type singularities under certain conditions. The SR-inverse, with appropriate null motion, was proposed as a general approach to singularity avoidance because of its ability to avoid singularities through a limited introduction of torque error. Simulation results confirmed the superior performance of this method compared with the other available and proposed pseudoinverse-based steering laws.
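The trade-off between the Moore-Penrose pseudoinverse and the SR-inverse described in this abstract can be illustrated numerically. The sketch below is not the thesis's actual steering law: the Jacobian `J`, the commanded torque `tau_cmd`, and the damping parameter `lam` are illustrative assumptions, with `J` deliberately made nearly rank-deficient to mimic a near-singular CMG configuration.

```python
import numpy as np

# Illustrative (hypothetical) 3x4 gimbal-rate-to-torque Jacobian near a
# singular configuration: the third torque axis is almost lost.
J = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1e-6, 0.0],
])
tau_cmd = np.array([0.1, 0.2, 0.3])  # commanded torque (illustrative)

def moore_penrose_rates(J, tau):
    """Minimum-norm gimbal rates via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(J) @ tau

def sr_inverse_rates(J, tau, lam=1e-3):
    """Singularity-robust (SR) inverse: J^T (J J^T + lam*I)^(-1) tau.
    Trades a small torque error for bounded gimbal rates near singularity."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), tau)

rates_mp = moore_penrose_rates(J, tau_cmd)
rates_sr = sr_inverse_rates(J, tau_cmd)
print("pseudoinverse rate norm:", np.linalg.norm(rates_mp))
print("SR-inverse rate norm:   ", np.linalg.norm(rates_sr))
print("SR torque error norm:   ", np.linalg.norm(J @ rates_sr - tau_cmd))
```

Near the singularity the pseudoinverse demands enormous gimbal rates, while the SR-inverse keeps the rates bounded at the cost of a small, controlled torque error along the nearly lost axis.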
Infinite derivative gravity: non-singular cosmology & blackhole solutions
NASA Astrophysics Data System (ADS)
Mazumdar, A.
Both Einstein’s theory of general relativity and Newton’s theory of gravity possess a short-distance and small-time-scale catastrophe. The black-hole singularity and the cosmological Big Bang singularity highlight that current theories of gravity are incomplete descriptions at early times and small distances. I will discuss how one can potentially resolve these fundamental problems at both the classical and the quantum level. In particular, I will discuss infinite derivative theories of gravity, in which gravitational interactions become weaker in the ultraviolet, thereby resolving some of the classical singularities, such as the Big Bang and the Schwarzschild singularity, for compact non-singular objects with mass up to 10^25 grams. In this lecture, I will discuss quantum aspects of infinite derivative gravity and a few aspects which can make the theory asymptotically free in the UV.
Three dimensional canonical singularity and five dimensional N = 1 SCFT
NASA Astrophysics Data System (ADS)
Xie, Dan; Yau, Shing-Tung
2017-06-01
We conjecture that every three-dimensional canonical singularity defines a five-dimensional N = 1 SCFT. The flavor symmetry can be found from the singularity structure: the non-abelian flavor symmetry is read off from the singularity type over the one-dimensional singular locus. The dimension of the Coulomb branch is given by the number of compact crepant divisors in a crepant resolution of the singularity. The detailed structure of the Coulomb branch is described as follows: (a) a chamber of the Coulomb branch is described by a crepant resolution; this chamber is given by its Nef cone, and the prepotential is computed from triple intersection numbers; (b) the crepant resolution is not unique, and different resolutions are related by flops; the Nef cones from the crepant resolutions form a fan which is claimed to be the full Coulomb branch.
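The chamber prepotential mentioned in the abstract can be sketched in standard five-dimensional Coulomb-branch conventions (an illustrative form, not taken verbatim from the paper; here the c_{ijk} denote the triple intersection numbers of the compact crepant divisors S_i of the chosen resolution, and the phi^i are the Coulomb-branch scalars):

```latex
\mathcal{F}(\phi) \;=\; \frac{1}{6}\, c_{ijk}\, \phi^i \phi^j \phi^k,
\qquad
c_{ijk} \;=\; S_i \cdot S_j \cdot S_k .
```

This cubic form is valid within the chamber (the Nef cone) of a given crepant resolution; crossing a wall of the chamber corresponds to a flop, which changes the intersection numbers c_{ijk}, and gluing the chambers together yields the fan claimed to describe the full Coulomb branch.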