Sample records for entropy-constrained vector quantization

  1. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  2. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  3. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported in the literature results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
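
    The Lagrangian design idea summarized above can be illustrated with a small entropy-constrained VQ iteration. This is a generic sketch in the spirit of Chou, Lookabaugh and Gray's ECVQ, not the paper's residual (multistage) structure; the source, codebook size, and lambda value are arbitrary assumptions for illustration only.

```python
import numpy as np

def ecvq_iteration(data, codebook, probs, lam):
    """One descent step: assign by distortion + lam * rate, then update centroids/probs."""
    dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    rates = -np.log2(np.maximum(probs, 1e-12))          # codeword rate = -log2(probability)
    labels = np.argmin(dists + lam * rates, axis=1)     # Lagrangian nearest-codeword rule
    new_codebook = codebook.copy()
    counts = np.zeros(len(codebook))
    for k in range(len(codebook)):
        members = data[labels == k]
        counts[k] = len(members)
        if counts[k] > 0:
            new_codebook[k] = members.mean(axis=0)      # centroid update
    new_probs = np.maximum(counts, 1e-12)
    return new_codebook, new_probs / new_probs.sum(), labels

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))                          # memoryless Gaussian source, 2-D vectors
C, p = rng.normal(size=(16, 2)), np.full(16, 1 / 16)    # hypothetical 16-codeword starting point
for _ in range(20):
    C, p, labels = ecvq_iteration(X, C, p, lam=0.5)     # lam trades distortion against entropy
print(((X - C[labels]) ** 2).sum(axis=1).mean(), -np.log2(p[labels]).mean())
```

    Sweeping lam traces out different distortion/entropy trade-offs, which is the role the entropy constraint plays in the paper's design algorithm.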

  4. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  5. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we've shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.

  6. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  7. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  8. Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Fetzer, Eric

    2007-01-01

    Empirical probability distributions derived from the data are the signatures of physical processes generating the data. Distributions defined on different space-time windows can be compared and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data in a way that preserves information, focusing on rate-distortion theory and using the entropy-constrained vector quantization algorithm.

  9. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.

  10. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method makes it possible to project the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  11. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that is just being addressed in recent literature since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity in optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or for monochrome images; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits/pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.

  12. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
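
    To make the reconstruction loop concrete, the sketch below shows a generic projected-Landweber-style iteration with soft thresholding in a transform domain. It is not QBCS-EPL itself: the 1-D signal, the DCT standing in for the wavelet transform, the fixed threshold, and the absence of measurement quantization and of the entropy-aware threshold model are all simplifying assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(7)
n, m = 256, 128
coeffs_true = np.where(rng.random(n) < 0.1, rng.normal(size=n), 0.0)  # sparse transform coefficients
x_true = idct(coeffs_true, norm='ortho')
Phi = rng.normal(size=(m, n)) / np.sqrt(m)          # random measurement matrix
y = Phi @ x_true                                    # measurements (quantization not modelled here)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(Phi, ord=2) ** 2        # safe Landweber step size
thresh = 0.02                                       # fixed threshold; the paper derives it from entropy/bitrate
for _ in range(200):
    x = x + step * Phi.T @ (y - Phi @ x)            # Landweber (gradient) step
    c = dct(x, norm='ortho')
    c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)   # soft thresholding in the transform domain
    x = idct(c, norm='ortho')
```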

  13. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  14. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    O'Rourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  15. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods only exaggerating the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which involves the enhancement of both local details and fore- and background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantized evaluations.
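
    As context for the histogram-modification step, the sketch below applies plain double-plateau histogram equalization with fixed, hand-picked thresholds. The PSO search for the thresholds and the local-entropy weighting, which are the paper's actual contributions, are omitted, and the input frame is synthetic.

```python
import numpy as np

def double_plateau_he(img, t_up, t_down, levels=256):
    """Histogram equalization with upper/lower plateau thresholds clipping the histogram."""
    raw = np.bincount(img.ravel(), minlength=levels).astype(float)
    hist = np.clip(raw, t_down, t_up)      # suppress background spikes, lift weak detail bins
    hist[raw == 0] = 0.0                    # gray levels absent from the image stay empty
    cdf = np.cumsum(hist)
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

ir = (np.random.default_rng(2).random((240, 320)) * 255).astype(np.uint8)  # stand-in IR frame
enhanced = double_plateau_he(ir, t_up=400, t_down=5)                        # thresholds chosen by hand
```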

  16. Gravitational surface Hamiltonian and entropy quantization

    NASA Astrophysics Data System (ADS)

    Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-02-01

    The surface Hamiltonian corresponding to the surface part of a gravitational action has xp structure where p is conjugate momentum of x. Moreover, it leads to TS on the horizon of a black hole. Here T and S are temperature and entropy of the horizon. Imposing the hermiticity condition we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of this earlier result as the calculation is based on the direct quantization of the Hamiltonian in the sense of usual quantum mechanics.

  17. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.

  18. Entropy-variation with resistance in a quantized RLC circuit derived by the generalized Hellmann-Feynman theorem

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Xu, Xue-Xiang; Hu, Li-Yun

    2010-06-01

    By virtue of the generalized Hellmann-Feynman theorem for the ensemble average, we obtain the internal energy and average energy consumed by the resistance R in a quantized resistance-inductance-capacitance (RLC) electric circuit. We also calculate the entropy-variation with R. The relation between entropy and R is also derived. By the use of figures we indeed see that the entropy increases with the increment of R.

  19. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.

  20. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.

  1. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ||b-a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
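
    A plausible formalization of the path distance described above, written on the assumption that the integral of the entropy H(p) is taken with respect to arc length along the path; the paper's exact functional may differ in detail.

```latex
d(a,b) \;=\; \inf_{\substack{p(0)=a,\ p(1)=b}} \int_{0}^{1} H\bigl(p(t)\bigr)\,\bigl\lVert \dot p(t) \bigr\rVert \, dt ,
\qquad
H(p) \;=\; -\sum_{i=1}^{N} p_i \log p_i ,
```

    where the infimum is taken over admissible paths p(t) through probability vectors on the N categories; the hybrid Newton/boundary-value scheme in the abstract then approximates the minimizing path numerically.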

  2. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
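
    For orientation, the block below is a plain sample-entropy implementation in the usual Richman-Moorman form; it is not the paper's quantized dynamical entropy (QDE) or QASE variant, and the test signals are synthetic stand-ins rather than gait data.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): negative log of the ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    n = len(x) - m                               # number of templates compared at both lengths
    def matches(length):
        t = np.array([x[i:i + length] for i in range(n)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev distance
        return (d <= r).sum() - n                # drop self-matches on the diagonal
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(8)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
noisy = regular + 0.5 * rng.normal(size=1000)
print(sample_entropy(regular), sample_entropy(noisy))   # the noisier series yields higher entropy
```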

  3. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred method for image compression. The VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average where the centroids move after every vector is encoded. When computing the centroid of a fixed set of vectors the resultant centroid is identical to the previous centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The defined quantizer changes after every encoded vector by recursively updating the centroid of minimum distance which is selected by the encoder. Since the quantizer is changing definition or states after every encoded vector, the decoder must now receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
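
    A minimal sketch of the recursive-centroid idea described above: after each encoded vector, the selected codeword is updated as a running mean of the vectors mapped to it. The initialization, distance measure, and the side-information mechanism that keeps the decoder synchronized are assumptions or omissions, not the report's exact design.

```python
import numpy as np

def adaptive_vq_encode(vectors, codebook):
    """Encode vectors while recursively moving the winning centroid toward each input."""
    codebook = codebook.astype(float).copy()
    counts = np.ones(len(codebook))          # one "virtual" sample per initial codeword
    indices = []
    for v in vectors:
        k = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))   # nearest codeword
        indices.append(k)
        counts[k] += 1
        codebook[k] += (v - codebook[k]) / counts[k]            # recursive moving-average update
    return indices, codebook

rng = np.random.default_rng(4)
data = rng.normal(size=(1000, 4))            # stand-in source vectors
init = rng.normal(size=(8, 4))               # small initial codebook
idx, final_codebook = adaptive_vq_encode(data, init)
```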

  4. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  5. Competitive learning with pairwise constraints.

    PubMed

    Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep

    2013-01-01

    Constrained clustering has been an active research topic since the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results--in terms of the normalized mutual information (NMI)--from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.

  6. Becchi-Rouet-Stora-Tyutin formalism and zero locus reduction

    NASA Astrophysics Data System (ADS)

    Grigoriev, M. A.; Semikhatov, A. M.; Tipunin, I. Yu.

    2001-08-01

    In the Becchi-Rouet-Stora-Tyutin (BRST) quantization of gauge theories, the zero locus ZQ of the BRST differential Q carries an (anti)bracket whose parity is opposite to that of the fundamental bracket. Observables of the BRST theory are in a 1:1 correspondence with Casimir functions of the bracket on ZQ. For any constrained dynamical system with the phase space N0 and the constraint surface Σ, we prove its equivalence to the constrained system on the BFV-extended phase space with the constraint surface given by ZQ. Reduction to the zero locus of the differential gives rise to relations between bracket operations and differentials arising in different complexes (the Gerstenhaber, Schouten, Berezin-Kirillov, and Sklyanin brackets); the equation ensuring the existence of a nilpotent vector field on the reduced manifold can be the classical Yang-Baxter equation. We also generalize our constructions to the bi-QP manifolds which from the BRST theory viewpoint correspond to the BRST-anti-BRST-symmetric quantization.

  7. Dynamics of entropy and nonclassical properties of the state of a Λ-type three-level atom interacting with a single-mode cavity field with intensity-dependent coupling in a Kerr medium

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.

    2012-02-01

    In this paper, we study the interaction between a three-level atom and a quantized single-mode field with ‘intensity-dependent coupling’ in a ‘Kerr medium’. The three-level atom is considered to be in a Λ-type configuration. Under particular initial conditions, which may be prepared for the atom and the field, the dynamical state vector of the entire system will be explicitly obtained, for the arbitrary nonlinearity function f(n) associated with any physical system. Then, after evaluating the variation of the field entropy against time, we will investigate the quantum statistics as well as some of the nonclassical properties of the introduced state. During our calculations we investigate the effects of intensity-dependent coupling, Kerr medium and detuning parameters on the depth and domain of the nonclassicality features of the atom-field state vector. Finally, we compare our obtained results with those of V-type three-level atoms.

  8. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.

  9. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  10. Quasinormal modes and quantization of area/entropy for noncommutative BTZ black hole

    NASA Astrophysics Data System (ADS)

    Huang, Lu; Chen, Juhua; Wang, Yongjiu

    2018-04-01

    We investigate the quasinormal modes and area/entropy spectrum for the noncommutative BTZ black hole. The exact expressions for QNM frequencies are presented by expanding the noncommutative parameter in horizon radius. We find that the noncommutativity does not affect the conformal weights (h_L, h_R), but it influences the thermal equilibrium. The intuitive expressions of the area/entropy spectrum are calculated in terms of Bohr-Sommerfeld quantization, and our results show that the noncommutativity leads to a nonuniform area/entropy spectrum. We also find that the coupling constant ξ, which is the coupling between the scalar and the gravitational fields, shifts the QNM frequencies but does not influence the structure of the area/entropy spectrum.

  11. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the code-word which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A code-book of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network for codebook design. During the encoding process, the correlation of the address is considered and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as that of the normal VQ scheme but the bit rate is about 1/2 to 1/3 of that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix to select the best subcodebook to encode the image is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique" is presented. In addition to chapters 2 through 6 which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
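
    To ground the baseline VQ pipeline that the abstract builds on (codebook design, nearest-codeword encoding, table-lookup decoding), here is a generic sketch using k-means. The address-correlation coding that distinguishes Address VQ is not shown, and the data are synthetic stand-ins for image blocks.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(5)
train = rng.normal(size=(5000, 16))               # training vectors, e.g. flattened 4x4 image blocks
codebook, _ = kmeans2(train, 256, minit='++', seed=5)   # iterative clustering codebook design

test = rng.normal(size=(100, 16))
indices, _ = vq(test, codebook)                   # encoder: transmit one 8-bit index per block
reconstructed = codebook[indices]                 # decoder: simple table lookup by index
```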

  12. Permutation entropy with vector embedding delays

    NASA Astrophysics Data System (ADS)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D-1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.
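
    For reference, the sketch below computes ordinary permutation entropy with a single scalar embedding delay; the paper's generalization replaces the one delay with a vector of delays, one per embedding step, which this sketch does not implement. The test signal is a synthetic noisy sine wave.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of ordinal patterns of length `order` sampled with a scalar delay."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window))          # ordinal pattern of the embedded vector
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), float) / n
    h = -(p * np.log2(p)).sum()
    return h / np.log2(factorial(order)) if normalize else h

x = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.1 * np.random.default_rng(3).normal(size=4000)
print(permutation_entropy(x, order=4, delay=5))  # lower values indicate stronger ordinal structure
```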

  13. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.

  14. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.

  15. Anisotropic magnetocaloric response in AlFe2B2

    DOE PAGES

    Barua, R.; Lejeune, B. T.; Ke, L.; ...

    2018-02-19

    Experimental investigations of the magnetocaloric response of the intermetallic layered AlFe2B2 compound along the principal axes of the orthorhombic cell were carried out using aligned plate-like crystallites with an anisotropic [101] growth habit. Results were confirmed to be consistent with density functional theory calculations. Field-dependent magnetization data confirm that the a-axis is the easy direction of magnetization within the (ac) plane. The magnetocrystalline anisotropy energy required to rotate the spin quantization vector from the c- to the a-axis direction is determined as K ~ 0.9 MJ/m^3 at 50 K. Magnetic entropy change curves measured near the Curie transition temperature of 285 K reveal a large rotating magnetic entropy change of 1.3 J kg^-1 K^-1 at μ0H_app = 2 T, consistent with large differences in magnetic entropy change ΔS_mag measured along the a- and c-axes. Overall, this study provides insight of both fundamental and applied relevance concerning pathways for maximizing the magnetocaloric potential of AlFe2B2 for thermal management applications.

  16. Anisotropic magnetocaloric response in AlFe2B2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barua, R.; Lejeune, B. T.; Ke, L.

    Experimental investigations of the magnetocaloric response of the intermetallic layered AlFe2B2 compound along the principal axes of the orthorhombic cell were carried out using aligned plate-like crystallites with an anisotropic [101] growth habit. Results were confirmed to be consistent with density functional theory calculations. Field-dependent magnetization data confirm that the a-axis is the easy direction of magnetization within the (ac) plane. The magnetocrystalline anisotropy energy required to rotate the spin quantization vector from the c- to the a-axis direction is determined as K ~ 0.9 MJ/m^3 at 50 K. Magnetic entropy change curves measured near the Curie transition temperature of 285 K reveal a large rotating magnetic entropy change of 1.3 J kg^-1 K^-1 at μ0H_app = 2 T, consistent with large differences in magnetic entropy change ΔS_mag measured along the a- and c-axes. Overall, this study provides insight of both fundamental and applied relevance concerning pathways for maximizing the magnetocaloric potential of AlFe2B2 for thermal management applications.

  17. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and due to the finite training procedure involved in the construction of the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.

  18. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  19. Throat quantization of the Schwarzschild-Tangherlini(-AdS) black hole

    NASA Astrophysics Data System (ADS)

    Maeda, Hideki

    2018-01-01

    By the throat quantization pioneered by Louko and Mäkelä, we derive the mass and area/entropy spectra for the Schwarzschild-Tangherlini-type asymptotically flat or AdS vacuum black hole in arbitrary dimensions. Using the WKB approximation for black holes with large mass, we show that area/entropy is equally spaced for asymptotically flat black holes, while mass is equally spaced for asymptotically AdS black holes. Exact spectra can be obtained for toroidal AdS black holes in arbitrary dimensions including the three-dimensional BTZ black hole.

  20. An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.

    NASA Astrophysics Data System (ADS)

    Tate, Ranjeet Shekhar

    1992-01-01

    General relativity has two features in particular, which make it difficult to apply to it existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non -canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.

  1. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficients' subband. Then an optimal quadtree method was employed to partition each wavelet coefficients' subband into several sizes of subblocks. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, the JPEG, JPEG2000, and fractal coding approaches were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
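
    A compact sketch of the overall structure described above (wavelet decomposition, keep the lowest-frequency subband, vector-quantize the detail subbands). It assumes PyWavelets and SciPy are available, uses fixed 2x2 blocks on synthetic data, and omits the paper's LFD analysis, quadtree variable block sizes, and energy-based codebook training.

```python
import numpy as np
import pywt
from scipy.cluster.vq import kmeans2, vq

def blocks(sub, bs=2):
    """Cut a subband into non-overlapping bs x bs blocks, flattened to row vectors."""
    h, w = (sub.shape[0] // bs) * bs, (sub.shape[1] // bs) * bs
    v = sub[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    return v.reshape(-1, bs * bs)

rng = np.random.default_rng(1)
img = rng.random((128, 128))                        # stand-in for a medical image slice
cA, *details = pywt.wavedec2(img, 'haar', level=2)  # cA kept for lossless coding

coded = []
for level in details:                               # each level holds (cH, cV, cD)
    for sub in level:
        vecs = blocks(sub)
        cb, _ = kmeans2(vecs, 32, minit='++', seed=1)   # per-subband codebook
        labels, _ = vq(vecs, cb)                        # indices sent to the decoder
        coded.append((cb, labels))
```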

  2. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficients' subband. Then an optimal quadtree method was employed to partition each wavelet coefficients' subband into several sizes of subblocks. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, the JPEG, JPEG2000, and fractal coding approaches were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  3. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.

  4. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.

  5. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to maximal (minimal) value of number of ghosts and antighosts in the Schrodinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should be generally modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is generalized then to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  6. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  7. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    PubMed

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    A chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using a physical broadband white chaos generated by optical heterodyning of two ECLs as an entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
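
    The extraction step itself is simple enough to sketch: quantize with an 8-bit ADC and keep the 4 least significant bits of each sample. The Gaussian samples below are a synthetic stand-in for the heterodyne chaos waveform, so this illustrates only the mechanics, not the randomness claims.

```python
import numpy as np

rng = np.random.default_rng(6)
samples = rng.normal(size=100_000)                            # stand-in analog waveform samples
scaled = (samples - samples.min()) / np.ptp(samples) * 255    # map to the 8-bit ADC range
adc = np.clip(scaled, 0, 255).astype(np.uint8)                # 8-bit quantization
lsbs = adc & 0x0F                                             # keep the 4 least significant bits
bits = ((lsbs[:, None] >> np.arange(4)) & 1).ravel()          # 4 output bits per sample
```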

  8. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…

  9. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  10. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  11. The Population Inversion and the Entropy of a Moving Two-Level Atom in Interaction with a Quantized Field

    NASA Astrophysics Data System (ADS)

    Abo-Kahla, D. A. M.; Abdel-Aty, M.; Farouk, A.

    2018-05-01

An atom with only two energy eigenvalues, described by a two-dimensional state space spanned by the two energy eigenstates, is called a two-level atom. We consider a two-level atom moving with constant velocity, and provide an analytic solution for the system as it interacts with a quantized field. Furthermore, the significant effect of temperature on the atomic inversion, the purity and the information entropy is discussed for an initial state that is either an excited state or a maximally mixed state. Additionally, the effect of the number of half wavelengths of the field mode is investigated.

  12. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a technique that allows a large reduction in data storage and computational effort. One of the most recent VQ techniques that handles the poor estimation of vector centroids due to biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). In this paper, we are therefore motivated to propose a justification of an FDVQ-based hidden Markov model (HMM) and to investigate its effectiveness and efficiency in the classification of genotype-image phenotypes. The performance evaluation and comparison of recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) are carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are almost similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using hypothesis t-tests. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.

  13. The Effects of Aging and Dual Tasking on Human Gait Complexity During Treadmill Walking: A Comparative Study Using Quantized Dynamical Entropy and Sample Entropy.

    PubMed

    Ahmadi, Samira; Wu, Christine; Sepehri, Nariman; Kantikar, Anuprita; Nankar, Mayur; Szturm, Tony

    2018-01-01

Quantized dynamical entropy (QDE) has recently been proposed as a new measure to quantify the complexity of dynamical systems with the purpose of offering better computational efficiency. This paper further investigates the viability of this method using five different human gait signals. These signals were recorded during normal walking and while performing secondary tasks, for two age groups (young and older adults). The results are compared with the outcomes of the previously established sample entropy (SampEn) measure for the same signals. We also study how analyzing segmented, spatially and temporally normalized signals differs from analyzing the whole data. Our findings show that human gait signals become more complex as people age and while they are cognitively loaded. Center of pressure (COP) displacement in the mediolateral direction is the best signal for showing the gait changes. Moreover, the results suggest that by segmenting the data, more information about intrastride dynamical features is obtained. Most importantly, QDE is shown to be a reliable measure for human gait complexity analysis.
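
    For readers unfamiliar with the baseline measure used for comparison, a minimal sample entropy (SampEn) sketch in Python follows; the embedding dimension m = 2 and tolerance r = 0.2*std are common defaults and are assumptions here, not parameters taken from the study:

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """SampEn(m, r) = -ln(A/B), where B counts template matches of length m
            and A counts matches of length m + 1 (self-matches excluded)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()

            def count_matches(length):
                # Build all overlapping templates of the given length.
                templates = np.array([x[i:i + length] for i in range(len(x) - length)])
                count = 0
                for i in range(len(templates)):
                    # Chebyshev distance between template i and all later templates.
                    dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(dist <= r)
                return count

            b = count_matches(m)
            a = count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        # Toy check: white noise should give a higher SampEn than a slow sine wave.
        rng = np.random.default_rng(1)
        print(sample_entropy(rng.standard_normal(500)))
        print(sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 500))))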

  14. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061

  15. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    PubMed

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
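
    To make the family of algorithms being accelerated concrete, here is a minimal, unaccelerated fuzzy K-means (fuzzy C-means) codebook design loop in Python; the fuzziness exponent and the toy training set are assumptions, and none of the paper's acceleration tricks (reduced iteration counts, fast nearest-neighbor search) are included:

        import numpy as np

        def fuzzy_kmeans_codebook(train, n_codevectors=16, fuzziness=2.0,
                                  n_iter=50, seed=0):
            """Plain fuzzy C-means: soft memberships -> weighted centroid update."""
            rng = np.random.default_rng(seed)
            codebook = train[rng.choice(len(train), n_codevectors, replace=False)]
            for _ in range(n_iter):
                # Squared distances between every training vector and every codevector.
                d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                d2 = np.maximum(d2, 1e-12)                      # avoid division by zero
                # Membership of vector i in cluster j (rows sum to 1).
                inv = d2 ** (-1.0 / (fuzziness - 1.0))
                u = inv / inv.sum(axis=1, keepdims=True)
                # Weighted centroid update.
                w = u ** fuzziness
                codebook = (w.T @ train) / w.sum(axis=0)[:, None]
            return codebook

        # Toy run on random 4-dimensional training vectors (e.g., 2x2 image blocks).
        rng = np.random.default_rng(1)
        train = rng.standard_normal((2000, 4))
        cb = fuzzy_kmeans_codebook(train)
        print(cb.shape)   # (16, 4)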

  16. A short essay on quantum black holes and underlying noncommutative quantized space-time

    NASA Astrophysics Data System (ADS)

    Tanaka, Sho

    2017-01-01

We emphasize the importance of noncommutative geometry or Lorentz-covariant quantized space-time towards the ultimate theory of quantum gravity and Planck-scale physics. We focus our attention on the statistical and substantial understanding of the Bekenstein-Hawking area-entropy law of black holes in terms of the kinematical holographic relation (KHR). The KHR manifestly holds in Yang's quantized space-time as a result of the kinematical reduction of spatial degrees of freedom caused by its noncommutative geometry, and it plays an important role in our approach without any recourse to the familiar hypothesis known as the holographic principle. In the present paper, we find a unified form of the KHR applicable to the whole region ranging from macroscopic to microscopic scales in spatial dimension d = 3. We note the possibility of a nontrivial modification of the area-entropy law of black holes, which becomes most remarkable in extremely microscopic systems close to the Planck scale.

  17. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.

  18. Quantized Vector Potential and the Photon Wave-function

    NASA Astrophysics Data System (ADS)

    Meis, C.; Dahoo, P. R.

    2017-12-01

The vector potential function $\vec{\alpha}_{k\lambda}(\vec{r},t)$ for a k-mode and λ-polarization photon, with the quantized amplitude $\alpha_{0k}(\omega_k) = \xi\omega_k$, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian $\tilde{H} = -i\hbar c\,\ldots$

  19. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
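
    A minimal sketch of the forward gain-adaptation idea described above (toy codebook and a simple RMS gain estimator; the actual gain estimators and their optimization in the paper are more elaborate, and in a real forward-adaptive coder the gain itself would also be quantized and transmitted as side information):

        import numpy as np

        def encode(x, codebook, eps=1e-8):
            """Estimate a gain, normalize the input vector, then pick the nearest codevector."""
            gain = np.sqrt(np.mean(x ** 2)) + eps          # simple RMS gain estimate
            normalized = x / gain
            dists = np.sum((codebook - normalized) ** 2, axis=1)
            return int(np.argmin(dists)), gain

        def decode(index, gain, codebook):
            """Scale the selected codevector back up by the estimated gain."""
            return gain * codebook[index]

        rng = np.random.default_rng(0)
        codebook = rng.standard_normal((256, 8))           # 256 codevectors of dimension 8
        x = 5.0 * rng.standard_normal(8)                   # high-level input frame
        idx, g = encode(x, codebook)
        x_hat = decode(idx, g, codebook)
        print(idx, np.mean((x - x_hat) ** 2))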

  20. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low resource demand (system memory); image compression and quality results are presented.

  1. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates, at video rates, the best codevector in full-search or large tree-search VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
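
    To illustrate why a tree-structured codebook permits searches in roughly O(log N) comparisons per input vector, here is a minimal binary tree-search VQ sketch in Python (a toy tree built by recursive 2-means splitting; the chip's SIMD, pipelined, weighted-distortion search is not modeled):

        import numpy as np

        def two_means(data, n_iter=10, seed=0):
            """Tiny 2-means used to split a node of the tree."""
            rng = np.random.default_rng(seed)
            centers = data[rng.choice(len(data), 2, replace=False)]
            for _ in range(n_iter):
                assign = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
                for k in (0, 1):
                    if np.any(assign == k):
                        centers[k] = data[assign == k].mean(axis=0)
            return centers, assign

        def build_tree(data, depth):
            """Each internal node keeps two test centroids; leaves keep the codevector."""
            if depth == 0 or len(data) < 4:
                return {"leaf": data.mean(axis=0)}
            centers, assign = two_means(data)
            if np.all(assign == assign[0]):            # degenerate split: stop here
                return {"leaf": data.mean(axis=0)}
            return {"centers": centers,
                    "children": [build_tree(data[assign == k], depth - 1) for k in (0, 1)]}

        def tree_search(node, x):
            """Descend the tree: one 2-way comparison per level, O(log N) total."""
            while "leaf" not in node:
                d = ((node["centers"] - x) ** 2).sum(axis=1)
                node = node["children"][int(np.argmin(d))]
            return node["leaf"]

        rng = np.random.default_rng(1)
        train = rng.standard_normal((4000, 8))        # training vectors, dimension K = 8
        tree = build_tree(train, depth=6)             # up to 2**6 = 64 leaf codevectors
        x = rng.standard_normal(8)
        print(tree_search(tree, x))                   # quantized reproduction of x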

  2. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  3. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  4. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  5. Entropy Analysis of Kinetic Flux Vector Splitting Schemes for the Compressible Euler Equations

    NASA Technical Reports Server (NTRS)

    Shiuhong, Lui; Xu, Jun

    1999-01-01

The Flux Vector Splitting (FVS) scheme is one family of approximate Riemann solvers for the compressible Euler equations. In this paper, the discretized entropy condition of the Kinetic Flux Vector Splitting (KFVS) scheme based on gas-kinetic theory is proved. The proof of the entropy condition involves the difference between the entropy definitions for distinguishable and indistinguishable particles.

  6. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, a novel feature quantization scheme is proposed in this paper to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as the code word, the generated BSIFT naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
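
    The core quantization step can be sketched very simply; the median-threshold rule below is an assumption used for illustration only (the published BSIFT scheme includes additional refinements such as feature filtering and code-word expansion):

        import numpy as np

        def binarize_sift(descriptor):
            """Map a 128-D SIFT descriptor to a 128-bit vector by thresholding each
            element against the descriptor's own median (illustrative rule only)."""
            d = np.asarray(descriptor, dtype=float)
            return (d > np.median(d)).astype(np.uint8)

        def code_word(bits, n=32):
            """Pack the first n bits into an integer for use as an inverted-file key."""
            return int("".join(map(str, bits[:n])), 2)

        rng = np.random.default_rng(0)
        sift = rng.random(128)                 # stand-in for a real SIFT descriptor
        bits = binarize_sift(sift)
        print(bits[:16], code_word(bits))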

  7. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.

  8. A conservation law, entropy principle and quantization of fractal dimensions in hadron interactions

    NASA Astrophysics Data System (ADS)

    Zborovský, I.

    2018-04-01

Fractal self-similarity of hadron interactions, demonstrated by the z-scaling of inclusive spectra, is studied. The scaling regularity reflects the fractal structure of the colliding hadrons (or nuclei) and takes into account general features of fragmentation processes expressed by fractal dimensions. The self-similarity variable z is a function of the momentum fractions x1 and x2 of the colliding objects carried by the interacting hadron constituents, and depends on the momentum fractions ya and yb of the scattered and recoil constituents carried by the inclusive particle and its recoil counterpart, respectively. Based on an entropy principle, new properties of the z-scaling concept are found: the conservation of fractal cumulativity in hadron interactions and the quantization of the fractal dimensions characterizing hadron structure and fragmentation processes at the constituent level.

  9. Survey of adaptive image coding techniques

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1977-01-01

    The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.

  10. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.

  11. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set for the codebooks, new code vectors are produced through the crossover genetic operation, and IP-HMM performs the recognition. The proposed speech recognition technique offers 97.14% accuracy.

  12. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates, in order to search locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows it to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.

  13. Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure analogous to the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with those found in the literature using the Dirac method.

  14. Field-temperature phase diagrams of freestanding and substrate-constrained epitaxial Ni-Mn-Ga-Co films for magnetocaloric applications

    NASA Astrophysics Data System (ADS)

    Diestel, A.; Niemann, R.; Schleicher, B.; Schwabe, S.; Schultz, L.; Fähler, S.

    2015-07-01

Ferroic cooling processes that rely on field-induced first-order transformations of solid materials are a promising step towards a more energy-efficient refrigeration technology. In particular, thin films are discussed for their fast heat transfer and possible applications in microsystems. Substrate-constrained films are not useful since their substrates act as a heat sink. In this article, we examine a substrate-constrained and a freestanding epitaxial film of magnetocaloric Ni-Mn-Ga-Co. We compare phase diagrams and entropy changes obtained by magnetic-field and temperature scans, which differ. We observe an asymmetry of the hysteresis between the heating and cooling branches, which vanishes at high magnetic fields. These effects are discussed with respect to the vector character of a magnetic field, which acts differently on the nucleation and growth processes compared to the scalar character of temperature.

  15. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357

  16. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
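
    A minimal sketch of the least-significant-bits concatenation idea, with a byte-wise Shannon entropy estimate like the one quoted above; the sensor readings are simulated here, and the statistical fine-tuning mentioned in the paper is omitted:

        import numpy as np

        def harvest_lsbs(readings, bits_per_reading=2):
            """Keep the least significant bits of each raw sensor reading and concatenate."""
            readings = np.asarray(readings, dtype=np.uint16)
            mask = (1 << bits_per_reading) - 1
            kept = readings & mask
            return ((kept[:, None] >> np.arange(bits_per_reading)) & 1).astype(np.uint8).ravel()

        def shannon_entropy_per_byte(bits):
            """Shannon entropy (bits per byte) of the harvested stream."""
            usable = (len(bits) // 8) * 8
            byte_vals = np.packbits(bits[:usable])
            counts = np.bincount(byte_vals, minlength=256)
            p = counts[counts > 0] / counts.sum()
            return -(p * np.log2(p)).sum()

        # Simulated 12-bit temperature ADC readings standing in for on-board sensor data.
        rng = np.random.default_rng(0)
        readings = rng.integers(0, 4096, size=50_000)
        stream = harvest_lsbs(readings)
        print(shannon_entropy_per_byte(stream))   # close to 8 bits/byte for this toy source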

  17. Inability of the entropy vector method to certify nonclassicality in linelike causal structures

    NASA Astrophysics Data System (ADS)

    Weilenmann, Mirjam; Colbeck, Roger

    2016-10-01

    Bell's theorem shows that our intuitive understanding of causation must be overturned in light of quantum correlations. Nevertheless, quantum mechanics does not permit signaling and hence a notion of cause remains. Understanding this notion is not only important at a fundamental level, but also for technological applications such as key distribution and randomness expansion. It has recently been shown that a useful way to decide which classical causal structures could give rise to a given set of correlations is to use entropy vectors. These are vectors whose components are the entropies of all subsets of the observed variables in the causal structure. The entropy vector method employs causal relationships among the variables to restrict the set of possible entropy vectors. Here, we consider whether the same approach can lead to useful certificates of nonclassicality within a given causal structure. Surprisingly, we find that for a family of causal structures that includes the usual bipartite Bell structure they do not. For all members of this family, no function of the entropies of the observed variables gives such a certificate, in spite of the existence of nonclassical correlations. It is therefore necessary to look beyond entropy vectors to understand cause from a quantum perspective.

  18. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
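
    The two information measures used as design objectives can be illustrated with a small Python sketch over already-quantized station records; the binning and the toy data are assumptions, and the epsilon-dominance optimization itself is not shown:

        import numpy as np
        from collections import Counter

        def entropy_from_counts(counts):
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            return -(p * np.log2(p)).sum()

        def joint_entropy(columns):
            """H(X1,...,Xn) from quantized (integer-binned) station records."""
            return entropy_from_counts(Counter(map(tuple, np.column_stack(columns))))

        def total_correlation(columns):
            """C(X1,...,Xn) = sum_i H(Xi) - H(X1,...,Xn): redundancy across stations."""
            marginal = sum(entropy_from_counts(Counter(col.tolist())) for col in columns)
            return marginal - joint_entropy(columns)

        # Toy example: three quantized station records; stations A and B are correlated.
        rng = np.random.default_rng(0)
        a = rng.integers(0, 4, size=5000)
        b = (a + rng.integers(0, 2, size=5000)) % 4     # partly redundant with A
        c = rng.integers(0, 4, size=5000)               # independent station
        print(joint_entropy([a, b, c]), total_correlation([a, b, c]))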

  19. Quantum entanglement and position-momentum entropic squeezing of a moving Lambda-type three-level atom interacting with a single-mode quantized field with intensity-dependent coupling

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.

    2013-07-01

In this paper, we study the interaction between a moving Λ-type three-level atom and a single-mode cavity field in the presence of intensity-dependent atom-field coupling. After obtaining the state vector of the entire system explicitly, we study nonclassical features of the system such as quantum entanglement, position-momentum entropic squeezing, quadrature squeezing and sub-Poissonian statistics. Based on the obtained numerical results, we illustrate that the squeezed period, the duration of entropy squeezing and the maximal squeezing can be controlled by choosing an appropriate nonlinearity function together with including the atomic-motion effect through a suitable selection of the field-mode structure parameter. Also, the atomic motion, as well as the nonlinearity function, leads to oscillatory behaviour of the degree of entanglement between the atom and the field.

  20. Fedosov Deformation Quantization as a BRST Theory

    NASA Astrophysics Data System (ADS)

    Grigoriev, M. A.; Lyakhovich, S. L.

The relationship is established between the Fedosov deformation quantization of a general symplectic manifold and the BFV-BRST quantization of constrained dynamical systems. The original symplectic manifold M is presented as a second-class constrained surface in the fibre bundle T*ρM, which is a certain modification of the usual cotangent bundle equipped with a natural symplectic structure. The second-class system is converted into a first-class one by continuation of the constraints into the extended manifold, which is a direct sum of T*ρM and the tangent bundle TM. This extended manifold is equipped with a nontrivial Poisson bracket which naturally involves two basic ingredients of Fedosov geometry: the symplectic structure and the symplectic connection. The constructed first-class constrained theory, being equivalent to the original symplectic manifold, is quantized through the BFV-BRST procedure. The existence theorem is proven for the quantum BRST charge and the quantum BRST-invariant observables. The adjoint action of the quantum BRST charge is identified with the Abelian Fedosov connection, while any observable, proven to be a unique BRST-invariant continuation of the values defined on the original symplectic manifold, is identified with the Fedosov flat section of the Weyl bundle. The Fedosov fibrewise star multiplication is thus recognized as a conventional product of the quantum BRST-invariant observables.

  1. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  2. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and we propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.

  3. Quantization Of Temperature

    NASA Astrophysics Data System (ADS)

    O'Brien, Paul

    2017-01-01

Max Planck did not quantize temperature. I will show that the Planck temperature violates the Planck scale. Planck stated that the Planck scale was Nature's scale and independent of human construct, even stating that aliens would derive the same values. He made a mistake, because temperature is based on the Kelvin scale, which is man-made just like the meter and kilogram; he did not discover Nature's scale for the quantization of temperature, so his formula is flawed and his value is incorrect. Planck's calculation is T_P = M_P c^2 / k_B; the general form of this equation is T = E / k_B. Why is this wrong? The temperature for a fixed amount of energy depends on the volume it occupies. Using the correct formula involves specifying the radius of the volume, in the form of the product RE. This leads to an inequality and a limit that is equivalent to the Bekenstein bound, but using temperature instead of entropy. Rewriting this equation as a limit defines both the maximum temperature and Boltzmann's constant. This will saturate any space-time boundary with maximum temperature and information density, and also gives the minimum radius and entropy. The general form of the equation then becomes a limit in black-hole thermodynamics: T <= (RE) / (λ k_B).
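
    For reference, a worked evaluation of the Planck-temperature expression quoted above, using standard CODATA-scale values for the Planck mass, the speed of light and Boltzmann's constant (the numerical substitution is added here for illustration, not taken from the abstract):

        % LaTeX: evaluating T_P = M_P c^2 / k_B with rounded constants
        T_P \;=\; \frac{M_P c^2}{k_B}
            \;\approx\; \frac{(2.18\times10^{-8}\,\mathrm{kg})\,(3.00\times10^{8}\,\mathrm{m/s})^2}
                             {1.38\times10^{-23}\,\mathrm{J/K}}
            \;\approx\; 1.4\times10^{32}\,\mathrm{K}.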

  4. Radial quantization of the 3d CFT and the higher spin/vector model duality

    NASA Astrophysics Data System (ADS)

    Hu, Shan; Li, Tianjun

    2014-10-01

We study the radial quantization of the 3d O(N) vector model. We calculate the higher spin charges whose commutation relations give the higher spin algebra. The Fock states of higher spin gravity in AdS4 are realized as states in the 3d CFT, with the dynamical information encoded in their inner products. This serves as the simplest explicit demonstration of the CFT definition of quantum gravity.

  5. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed previously has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The earlier comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  6. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree-search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to variations in input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by using significant structure in the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.

  7. Tripartite entanglement dynamics and entropic squeezing of a three-level atom interacting with a bimodal cavity field

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.; Bagheri Harouni, M.

    2014-04-01

In this paper, we study the interaction between a Λ-type three-level atom and two quantized electromagnetic fields which are simultaneously injected into a bichromatic cavity surrounded by a Kerr medium, in the presence of field-field interaction (parametric down conversion) and detuning parameters. By applying a canonical transformation, the introduced model is reduced to the well-known form of the generalized Jaynes-Cummings model. Under particular initial conditions which may be prepared for the atom and the field, the time evolution of the state vector of the entire system is analytically evaluated. Then, the dynamics of the atom is studied through the evolution of the atomic population inversion. In addition, two different measures of entanglement in the tripartite system (three entities make up the system: two field modes and one atom), i.e., the von Neumann and linear entropies, are investigated. Also, two kinds of entropic uncertainty relations, from which entropy squeezing can be obtained, are discussed. In each case, the influences of the detuning parameters and the Kerr medium on the above nonclassicality features are analyzed in detail via numerical results. It is illustrated that the amount of the above-mentioned physical phenomena can be tuned by appropriately choosing the involved parameters.

  8. Entropy of black holes in N=2 supergravity

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.

    2018-07-01

Using the formalism of isolated horizons, we construct the space of solutions of asymptotically flat extremal black holes in N=2 pure supergravity in 4 dimensions. We prove that the laws of black hole mechanics hold for these black holes. Further, restricting to the constant-area phase space, we show that the spherical horizons admit a Chern-Simons theory. The standard way of quantizing this topological theory and counting states confirms that the entropy is indeed proportional to the area of the horizon.

  9. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Treesearch

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  10. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise, and discarding them via feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
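
    A minimal sketch of the dimension-selection-plus-1-bit-quantization idea in Python; the absolute-correlation importance score used here is a generic supervised proxy chosen for illustration, not necessarily the scoring rule proposed in the paper:

        import numpy as np

        def select_dimensions(features, labels, keep):
            """Rank dimensions by |correlation with the label| and keep the top ones."""
            x = features - features.mean(axis=0)
            y = labels - labels.mean()
            corr = np.abs(x.T @ y) / (np.linalg.norm(x, axis=0) * np.linalg.norm(y) + 1e-12)
            return np.argsort(corr)[::-1][:keep]

        def one_bit_quantize(features, dims):
            """Keep only the selected dimensions and binarize them by their sign."""
            return (features[:, dims] > 0).astype(np.uint8)

        # Toy stand-in for high-dimensional Fisher vectors with binary labels.
        rng = np.random.default_rng(0)
        fv = rng.standard_normal((1000, 4096))
        labels = (fv[:, :10].sum(axis=1) > 0).astype(float)   # only a few dims are informative
        dims = select_dimensions(fv, labels, keep=256)
        compact = one_bit_quantize(fv, dims)
        print(compact.shape, compact.dtype)                    # (1000, 256) uint8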

  11. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks; it also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, a high-quality, very-low-bit-rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined only by the previous channel symbol, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are made from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.

  12. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  13. True random bit generators based on current time series of contact glow discharge electrolysis

    NASA Astrophysics Data System (ADS)

    Rojas, Andrea Espinel; Allagui, Anis; Elwakil, Ahmed S.; Alawadhi, Hussain

    2018-05-01

Random bit generators (RBGs) in today's digital information and communication systems employ high-rate physical entropy sources such as electronic, photonic, or thermal time-series signals. However, the proper functioning of such physical systems is bound by specific constraints that make them in some cases weak and susceptible to external attacks. In this study, we show that the electrical current time series of contact glow discharge electrolysis, a dc voltage-powered micro-plasma in liquids, can be used for generating random bit sequences over a wide range of high dc voltages. The current signal is quantized into a binary stream by first applying a simple moving-average function, which centers the distribution around zero, and then applying logical operations which enable the binarized data to pass all tests in the industry-standard randomness test suite of the National Institute of Standards and Technology. Furthermore, the robustness of this RBG against power supply attacks has been examined and verified.
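
    The post-processing chain described above can be sketched in a few lines of Python; the window length and the pairwise-XOR decimation step stand in for the unspecified "logical operations" and are assumptions made for illustration:

        import numpy as np

        def to_bits(current, window=32):
            """Centre the current time series with a moving average, binarize by sign,
            then XOR adjacent bits as a simple whitening step (illustrative choice)."""
            x = np.asarray(current, dtype=float)
            kernel = np.ones(window) / window
            centred = x[window - 1:] - np.convolve(x, kernel, mode="valid")
            bits = (centred > 0).astype(np.uint8)
            bits = bits[:len(bits) - (len(bits) % 2)]   # even length for pairing
            return bits[0::2] ^ bits[1::2]              # pairwise XOR decimation

        # Toy stand-in for a sampled glow-discharge current trace (drift + noise).
        rng = np.random.default_rng(0)
        trace = 10.0 + 0.01 * np.arange(100_000) + rng.standard_normal(100_000)
        bits = to_bits(trace)
        print(len(bits), bits.mean())                   # ones fraction should be near 0.5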

  14. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiency in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, precipitation and streamflow networks in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method determines more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully because the rankings and optimal networks can change accordingly.

  15. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiency in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, precipitation and streamflow networks in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method determines more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully because the rankings and optimal networks can change accordingly.

  16. Automatic event detection in low SNR microseismic signals based on multi-scale permutation entropy and a support vector machine

    NASA Astrophysics Data System (ADS)

    Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming

    2017-07-01

    Microseismic monitoring is an effective means of providing early warning of rock or coal dynamic disasters, and its first step is microseismic event detection; however, low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper combines multi-scale permutation entropy with a support vector machine to detect low-SNR microseismic events. First, a signal feature extraction method based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by computing the multi-scale permutation entropy of the collected vibration signals and constructing a feature vector set. Finally, a comparative analysis of microseismic events and noise signals in the experiment shows that the differing characteristics of the two can be fully expressed using multi-scale permutation entropy. The detection model combined with the support vector machine offers high classification accuracy and fast real-time operation, and can meet the requirements of online, real-time extraction of microseismic events.
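    The following Python sketch illustrates, under stated assumptions, the kind of feature pipeline the abstract describes: permutation entropy is computed on coarse-grained copies of each signal to form a multi-scale feature vector, which is then fed to a support vector machine. The synthetic signals, the chosen scales and order, and the use of scikit-learn's SVC as a stand-in for the paper's least-squares SVM are all assumptions for illustration.

```python
import numpy as np
from math import factorial
from sklearn.svm import SVC

def permutation_entropy(x, order=4, normalize=True):
    """Shannon entropy of ordinal patterns of length `order` in signal x."""
    n = len(x) - order + 1
    patterns = np.argsort([x[i:i + order] for i in range(n)], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order)) if normalize else h

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def mpe_features(x, scales=(1, 2, 3, 4, 5), order=4):
    """Multi-scale permutation entropy feature vector."""
    return np.array([permutation_entropy(coarse_grain(x, s), order) for s in scales])

# Hypothetical training data: rows of raw vibration signals with 0/1 labels.
rng = np.random.default_rng(1)
signals = rng.normal(size=(40, 2000))
labels = rng.integers(0, 2, size=40)
X = np.array([mpe_features(s) for s in signals])
clf = SVC(kernel="rbf").fit(X, labels)   # stand-in for the paper's LS-SVM
```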

  17. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 100-MHz orthogonal frequency-division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 100-MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Moreover, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
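    A minimal sketch of the multidimensional quantization idea, assuming scikit-learn's KMeans as the clustering engine and a synthetic correlated waveform in place of the OFDM signal: neighboring samples are grouped into vectors, a k-means codebook of 2**n_bits centroids is trained, and only the centroid indices would be transmitted. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(samples, dim=4, n_bits=4, random_state=0):
    """Group neighboring samples into `dim`-dimensional vectors and quantize them
    with a k-means codebook of 2**n_bits centroids."""
    n = len(samples) // dim
    vectors = samples[:n * dim].reshape(n, dim)
    km = KMeans(n_clusters=2 ** n_bits, n_init=10, random_state=random_state).fit(vectors)
    indices = km.predict(vectors)                  # what would be transmitted
    reconstructed = km.cluster_centers_[indices].reshape(-1)
    return indices, reconstructed

# Hypothetical waveform: correlated neighboring samples favor vector quantization.
rng = np.random.default_rng(2)
waveform = np.convolve(rng.normal(size=5000), np.ones(8) / 8, mode="same")
idx, recon = kmeans_quantize(waveform)
snr = 10 * np.log10(np.var(waveform[:len(recon)]) / np.var(waveform[:len(recon)] - recon))
print(f"reconstruction SNR ~ {snr:.1f} dB")
```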

  18. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify the sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human visual system (HVS) and on its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. The paper is organized as follows. The first section briefly highlights the importance of newer and better compression algorithms and explains the most relevant characteristics of the HVS, including the advantages and disadvantages related to the behavior of human vision in response to ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor actually constructed in the fifth section. The third section concentrates the most important data gathered on brightness models and addresses the construction of so-called brightness maps (quantifications of human perception of visible object reflectance) in a bi-dimensional model. The fourth section treats the Hermite transform, a special case of polynomial transforms, in an applicable discrete form. As previous work has shown, the Hermite transform is a useful and practical way to efficiently code the energy within an image block and to decide which kind of quantization (scalar or vector) should be applied to it; it is also a unique tool for structurally classifying an image block within a given lattice, which is intended as one of the main contributions of this work. The fifth section fuses the proposals derived from the study of these three main topics to propose an image compression model that exploits vector quantizers inside the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows results obtained while testing the coding-decoding model; the guidelines for evaluating image compression performance were compression ratio, SNR, and psycho-visual quality. Conclusions derived from the research and possible unexplored paths are also presented in this section.

  19. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
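    A rough single-machine sketch of the multi-level idea (not the authors' SIMD implementation): each level runs a full-search VQ on the residual left by the previous level, and the final residual is kept so the data can be reconstructed losslessly. KMeans from scikit-learn stands in for full-search codebook design, and the block size, level count, and codebook size are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def progressive_vq(blocks, n_levels=3, codebook_size=64, random_state=0):
    """Multi-level full-search VQ: each level quantizes the residual left by the previous
    one, returning per-level indices, codebooks, and the final residual (which a real
    coder would compress losslessly)."""
    residual = blocks.astype(float)
    indices, codebooks = [], []
    for _ in range(n_levels):
        km = KMeans(n_clusters=codebook_size, n_init=5, random_state=random_state).fit(residual)
        idx = km.predict(residual)
        indices.append(idx)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[idx]
    return indices, codebooks, residual

def reconstruct(indices, codebooks, levels):
    """Progressive reconstruction using only the first `levels` stages."""
    return sum(cb[idx] for idx, cb in zip(indices[:levels], codebooks[:levels]))

# Hypothetical data: 4x4 image blocks flattened to 16-dimensional vectors.
rng = np.random.default_rng(3)
blocks = rng.integers(0, 256, size=(2000, 16))
idx, cbs, final_residual = progressive_vq(blocks)
coarse = reconstruct(idx, cbs, levels=1)                   # early, lossy preview
full = reconstruct(idx, cbs, levels=3) + final_residual    # lossless with the residual
```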

  20. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case. 32 refs.
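    For background only, the special case referred to at the end of the abstract is Dirac's charge quantization condition; a standard statement in Gaussian units, for electric charge e and magnetic charge g, is the following (the paper's derived topological rule is more general and may be written differently):

```latex
\frac{e\,g}{\hbar c} = \frac{n}{2}, \qquad n \in \mathbb{Z}.
```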

  1. Generalized centripetal force law and quantization of motion constrained on 2D surfaces

    NASA Astrophysics Data System (ADS)

    Liu, Q. H.; Zhang, J.; Lian, D. K.; Hu, L. D.; Li, Z.

    2017-03-01

    For a particle of mass μ moving on a 2D surface f(x) = 0 embedded in 3D Euclidean space with coordinates x, it is an open and controversial problem whether Dirac's canonical quantization scheme for constrained motion allows for the geometric potential that has been experimentally confirmed. We note that Dirac's scheme hypothesizes that the symmetries indicated by the classical brackets among the positions x, the momenta p, and the Hamiltonian Hc remain in quantum mechanics, i.e., that the Dirac brackets [x, Hc]_D and [p, Hc]_D hold true after quantization, in addition to the fundamental ones [x, x]_D, [x, p]_D, and [p, p]_D. This set of hypotheses implies that the Hamiltonian operator is simultaneously determined during the quantization. The quantum mechanical relations corresponding to the classical ones p/μ = [x, Hc]_D directly give the geometric momenta. The time derivative of the momenta, ṗ = [p, Hc]_D, is in classical mechanics the generalized centripetal force law for a particle on the 2D surface, which in quantum mechanics permits both the geometric momenta and the geometric potential.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essman, Eric P.; Aganagic, Mina; Okuda, Takuya

    We study quantum entanglements of baby universes which appear in non-perturbative corrections to the OSV formula for the entropy of extremal black holes in type IIA string theory compactified on the local Calabi-Yau manifold defined as a rank 2 vector bundle over an arbitrary genus G Riemann surface. This generalizes the result for G=1 in hep-th/0504221. Non-perturbative terms can be organized into a sum over contributions from baby universes, and the total wave-function is their coherent superposition in the third quantized Hilbert space. We find that half of the universes preserve one set of supercharges while the other half preserve a different set, making the total universe stable but non-BPS. The parent universe generates baby universes by brane/anti-brane pair creation, and baby universes are correlated by conservation of non-normalizable D-brane charges under the process. There is no other source of entanglement of baby universes, and all possible states are superposed with equal weight.

  3. Detecting double compressed MPEG videos with the same quantization matrix and synchronized group of pictures structure

    NASA Astrophysics Data System (ADS)

    Aghamaleki, Javad Abbasi; Behrad, Alireza

    2018-01-01

    Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.

  4. Fuzzy spaces topology change and BH thermodynamics

    NASA Astrophysics Data System (ADS)

    Silva, C. A. S.; Landim, R. R.

    2014-03-01

    What is the ultimate fate of something that falls into a black hole? From this question arises one of the most intricate problems of modern theoretical physics: the black hole information loss paradox. Bekenstein and Hawking have shown that the entropy of a black hole is proportional to the surface area of its event horizon, which should be quantized in multiples of the Planck area. This led G. 't Hooft and L. Susskind to propose the holographic principle, which states that all the information inside the black hole can be stored on its event horizon. From these results, one may ask whether the solution to the information paradox could lie in the quantum properties of the black hole horizon. One way to quantize the event horizon is to see it as a fuzzy sphere, which possesses a close relation to Hopf algebras. This relation makes possible a topology change process in which a fuzzy sphere splits into two others. In this work it will be shown that, if one quantizes the black hole event horizon as a fuzzy sphere, taking into account its quantum symmetry properties, a topology change process for black holes can be defined without breaking unitarity or locality, and a possible solution to the information paradox can be obtained. Moreover, we show that this model can explain the origin of the black hole entropy and why black holes obey a generalized second law of thermodynamics.

  5. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
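    The following toy Python sketch is not the patented algorithm; it only illustrates the general idea of exploiting the one-unit uncertainty of quantization indices by nudging adjacent index values so that their parity carries auxiliary bits. The index range, the parity rule, and the function names are assumptions made for the example.

```python
import numpy as np

def embed_bits(indices, bits):
    """Toy index-parity embedding: adjust each index by at most one unit so its parity
    matches the auxiliary bit (exploits the one-unit uncertainty noted above)."""
    out = indices.copy()
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            out[i] += 1 if out[i] < out.max() else -1
    return out

def extract_bits(indices, n_bits):
    """Recover the embedded bits from the index parities."""
    return indices[:n_bits] % 2

rng = np.random.default_rng(4)
quantizer_indices = rng.integers(0, 32, size=100)   # hypothetical post-quantization indices
auxiliary = rng.integers(0, 2, size=100)
stego = embed_bits(quantizer_indices, auxiliary)
assert np.array_equal(extract_bits(stego, 100), auxiliary)
```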

  6. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  7. Prediction-guided quantization for video tone mapping

    NASA Astrophysics Data System (ADS)

    Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice

    2014-09-01

    Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone-mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone-mapped video content. Our technique provides an appropriate quantization for each mode of both the intra- and inter-prediction performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested over two different scenarios: the compression of tone-mapped LDR video content (using the HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show average bit-rate reductions at the same PSNR, over all the sequences and TMOs considered, of 20.3% and 27.3% for tone-mapped content and of 2.4% and 2.7% for HDR content.

  8. Constraints on operator ordering from third quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkuwa, Yoshiaki; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Ezawa, Yasuo

    2016-02-15

    In this paper, we analyse the Wheeler–DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.

  9. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
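    A small Python sketch of the vector-formation step described above, under assumptions: each vector stacks the values at one pixel location across all channels, a k-means codebook (standing in for the study's VQ codebook training) quantizes the vectors, and the covariance eigenvalues of the original and reconstructed data are compared. The synthetic cube, the codebook size, and the omission of the Difference-mapped Shift-extended Huffman stage are all simplifications.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical multispectral cube: 7 channels of a 64x64 scene.
rng = np.random.default_rng(5)
cube = rng.integers(0, 256, size=(7, 64, 64)).astype(float)

# Each vector is the stack of values at one pixel location across all channels.
vectors = cube.reshape(7, -1).T                    # shape (4096, 7)

codebook_bits = 6
km = KMeans(n_clusters=2 ** codebook_bits, n_init=5, random_state=0).fit(vectors)
indices = km.predict(vectors)                      # one small index per pixel location
reconstructed = km.cluster_centers_[indices]

rms = np.sqrt(np.mean((vectors - reconstructed) ** 2))
eig_orig = np.linalg.eigvalsh(np.cov(vectors, rowvar=False))
eig_recon = np.linalg.eigvalsh(np.cov(reconstructed, rowvar=False))
print("RMS error:", rms)
print("covariance eigenvalues (original / reconstructed):", eig_orig, eig_recon)
```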

  10. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper, Fock-space representations (contragradient Verma modules) of the quantized enveloping algebras that are explicitly natural from the geometrical point of view are constructed. In order to do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (regarded as a left comodule for the quantum group) by projecting to it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  11. Generic isolated horizons in loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Beetle, Christopher; Engle, Jonathan

    2010-12-01

    Isolated horizons model equilibrium states of classical black holes. A detailed quantization, starting from a classical phase space restricted to spherically symmetric horizons, exists in the literature and has since been extended to axisymmetry. This paper extends the quantum theory to horizons of arbitrary shape. Surprisingly, the Hilbert space obtained by quantizing the full phase space of all generic horizons with a fixed area is identical to that originally found in spherical symmetry. The entropy of a large horizon remains one-quarter its area, with the Barbero-Immirzi parameter retaining its value from symmetric analyses. These results suggest a reinterpretation of the intrinsic quantum geometry of the horizon surface.

  12. Particle localization, spinor two-valuedness, and Fermi quantization of tensor systems

    NASA Technical Reports Server (NTRS)

    Reifler, Frank; Morris, Randall

    1994-01-01

    Recent studies of particle localization show that square-integrable positive energy bispinor fields in a Minkowski space-time cannot be physically distinguished from constrained tensor fields. In this paper we generalize this result by characterizing all classical tensor systems which admit Fermi quantization as those having unitary Lie-Poisson brackets. Examples include Euler's tensor equation for a rigid body and Dirac's equation in tensor form.

  13. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), they automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
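    As a rough sketch of the replicator idea under stated assumptions, the autoencoder below uses scikit-learn's MLPRegressor with three hidden layers and a narrow middle layer, trained to reproduce its own input; the quantizing (stepped) middle-layer activation described by the author is omitted, and the synthetic two-parameter manifold is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical source: 10-dimensional vectors lying near a 2-dimensional manifold.
rng = np.random.default_rng(6)
t = rng.uniform(-1, 1, size=(2000, 2))
mixing = rng.normal(size=(2, 10))
data = np.tanh(t @ mixing) + 0.01 * rng.normal(size=(2000, 10))

# Replicator-style network: three hidden layers with a narrow middle "coordinate" layer,
# trained to reproduce its own input (target = input).
net = MLPRegressor(hidden_layer_sizes=(32, 2, 32), activation="tanh",
                   max_iter=2000, random_state=0)
net.fit(data, data)

reconstruction = net.predict(data)
mse = np.mean((data - reconstruction) ** 2)
print("reconstruction MSE:", mse)
```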

  14. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  15. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.

  16. Distance learning in discriminative vector quantization.

    PubMed

    Schneider, Petra; Biehl, Michael; Hammer, Barbara

    2009-10-01

    Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
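    For orientation, the sketch below implements only the basic Euclidean LVQ1 update (the nearest prototype is attracted by same-class samples and repelled by different-class samples); the relevance and full matrix adaptation discussed in the abstract are not shown, and the data, prototypes, and learning rate are assumptions for the example.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Basic LVQ1: move the nearest prototype toward same-class samples and away
    from different-class samples (Euclidean distance, fixed learning rate)."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)
            k = np.argmin(d)
            sign = 1.0 if proto_labels[k] == y[i] else -1.0
            P[k] += sign * lr * (X[i] - P[k])
    return P

# Hypothetical 2-class data with one prototype per class.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
protos = train_lvq1(X, y,
                    prototypes=np.array([[0.5, 0.5], [2.5, 2.5]]),
                    proto_labels=np.array([0, 1]))
```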

  17. Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins

    USGS Publications Warehouse

    Heiweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.

    2006-01-01

    The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.

  18. A BRST formulation for the conic constrained particle

    NASA Astrophysics Data System (ADS)

    Barbosa, Gabriel D.; Thibes, Ronaldo

    2018-04-01

    We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian, the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.

  19. Random versus maximum entropy models of neural population activity

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry

    2017-04-01

    The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.

  20. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that should cooperate, without interaction, to achieve application objectives for distributed estimation systems. We suggest as a new cost function a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first present an analysis showing that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the quantized posterior distribution, and we show that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance compared with typical designs and with novel design techniques previously published.

  1. Hydrodynamical model of anisotropic, polarized turbulent superfluids. I: constraints for the fluxes

    NASA Astrophysics Data System (ADS)

    Mongiovì, Maria Stella; Restuccia, Liliana

    2018-02-01

    This work is the first of a series of papers devoted to the study of the influence of the anisotropy and polarization of the tangle of quantized vortex lines in superfluid turbulence. A thermodynamical model of inhomogeneous superfluid turbulence previously formulated is here extended, to take into consideration also these effects. The model chooses as thermodynamic state vector the density, the velocity, the energy density, the heat flux, and a complete vorticity tensor field, including its symmetric traceless part and its antisymmetric part. The relations which constrain the constitutive quantities are deduced from the second principle of thermodynamics using the Liu procedure. The results show that the presence of anisotropy and polarization in the vortex tangle affects in a substantial way the dynamics of the heat flux, and allow us to give a physical interpretation of the vorticity tensor here introduced, and to better describe the internal structure of a turbulent superfluid.

  2. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
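    A compact Python sketch of the QKLMS recursion as described in the abstract, with assumed parameter values: the Gaussian-kernel dictionary grows only when a new input lies farther than a quantization threshold from every existing center; otherwise the coefficient of the closest center absorbs the update.

```python
import numpy as np

def qklms(X, d, step=0.5, sigma=1.0, eps=1.0):
    """Quantized kernel LMS: a Gaussian-kernel adaptive filter whose dictionary only grows
    when a new input is farther than `eps` from every existing center; otherwise the
    coefficient of the closest center is updated."""
    centers, alphas, predictions = [], [], []
    for x, target in zip(X, d):
        if centers:
            dists = np.linalg.norm(np.array(centers) - x, axis=1)
            k = np.exp(-dists ** 2 / (2 * sigma ** 2))
            y = float(np.dot(alphas, k))
        else:
            dists, y = np.array([np.inf]), 0.0
        err = target - y
        predictions.append(y)
        j = int(np.argmin(dists))
        if dists[j] <= eps:
            alphas[j] += step * err        # merge the update onto the closest center
        else:
            centers.append(x)
            alphas.append(step * err)      # grow the dictionary
    return np.array(predictions), centers

# Hypothetical static nonlinearity to estimate online.
rng = np.random.default_rng(8)
X = rng.uniform(-2, 2, size=(500, 1))
d = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=500)
y_hat, dictionary = qklms(X, d)
print("final dictionary size:", len(dictionary))
```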

  3. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  4. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for vector quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For tree-searched VQ, the special case of a binary tree-searched VQ (BTSVQ) is disclosed, with identical processing elements (PEs) in the array for both a raw-codebook VQ (RCVQ) and a difference-codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.

  5. A fingerprint key binding algorithm based on vector quantization and error correction

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template with a cryptographic key, so that the key is protected and accessed through fingerprint verification. In order to cope with the intrinsic fuzziness of varying fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template and then bind it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.

  6. Application of Classification Models to Pharyngeal High-Resolution Manometry

    ERIC Educational Resources Information Center

    Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.

    2012-01-01

    Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…

  7. A Maximal Entropy Distribution Derivation of the Sharma-Taneja-Mittal Entropic Form

    NASA Astrophysics Data System (ADS)

    Scarfone, Antonio M.

    In this letter we derive the distribution maximizing the Sharma-Taneja-Mittal entropy under certain constraints by using an information inequality satisfied by the Bregman divergence associated with this entropic form. The resulting maximal entropy distribution coincides with the one derived from the calculus according to the maximal entropy principle à la Jaynes.

  8. Automatic detection of voice impairments by means of short-term cepstral parameters and neural network based detectors.

    PubMed

    Godino-Llorente, J I; Gómez-Vilda, P

    2004-02-01

    It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool for diagnosing voice diseases and is a complementary technique to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors calculated according to the well-known Mel-frequency cepstral coefficient parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.

  9. Influence of conversion on the location of points and lines: The change of location entropy and the probability of a vector point inside the converted grid point

    NASA Astrophysics Data System (ADS)

    Chen, Nan

    2018-03-01

    Conversion of points or lines from vector to grid format, or vice versa, is the first operation required for most spatial analysis. Conversion, however, usually causes the location of points or lines to change, which influences the reliability of the results of spatial analysis or even results in analysis errors. The purpose of this paper is to evaluate the change of the location of points and lines during conversion using the concepts of probability and entropy. This paper shows that when a vector point is converted to a grid point, the vector point may be outside or inside the grid point. This paper deduces a formula for computing the probability that the vector point is inside the grid point. It was found that the probability increased with the side length of the grid and with the variances of the coordinates of the vector point. In addition, the location entropy of points and lines are defined in this paper. Formulae for computing the change of the location entropy during conversion are deduced. The probability mentioned above and the change of location entropy may be used to evaluate the location reliability of points and lines in Geographic Information Systems and may be used to choose an appropriate range of the side length of grids before conversion. The results of this study may help scientists and users to avoid mistakes caused by the change of location during conversion as well as in spatial decision and analysis.

  10. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  11. Entanglement Criteria of Two Two-Level Atoms Interacting with Two Coupled Modes

    NASA Astrophysics Data System (ADS)

    Baghshahi, Hamid Reza; Tavassoly, Mohammad Kazem; Faghihi, Mohammad Javad

    2015-08-01

    In this paper, we study the interaction between two two-level atoms and two coupled modes of a quantized radiation field in the form of a parametric frequency converter injected within an optical cavity enclosed by a medium with Kerr nonlinearity. It is demonstrated that, by applying the Bogoliubov-Valatin canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Then, under particular initial conditions for the atoms (a coherent superposition of their ground and upper states) and the fields (a standard coherent state), which may be prepared, the time evolution of the state vector of the entire system is analytically evaluated. In order to understand the degree of entanglement between subsystems (atom-field and atom-atom), the dynamics of entanglement is evaluated through different measures, namely von Neumann reduced entropy, concurrence, and negativity. In each case, the effects of Kerr nonlinearity and the detuning parameter on the above measures are numerically analyzed in detail. It is illustrated that the amount of entanglement can be tuned by choosing the evolved parameters appropriately.

  12. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    NASA Astrophysics Data System (ADS)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerating simulation for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.

  13. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    The wavelet transform, as a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the human visual system. In this paper, we discuss the classified vector quantization method for multiresolution-represented images.

  14. Permutation modulation for quantization and information reconciliation in CV-QKD systems

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    2017-08-01

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples (assuming the Gaussian variable is zero-mean, which is de facto the case) tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantizing Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the coding efficiency necessary to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Order statistics are used extensively throughout the development, from generation of the seed vector in PM to analysis of the error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
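    The quantization step itself can be sketched very simply (this is an illustration of the permutation-modulation idea, not the paper's reconciliation protocol): a Gaussian vector is represented by the permutation that orders its components, i.e., a fixed descending seed vector is rearranged into the same rank order, and only the permutation needs to be encoded. The seed choice below is an assumption.

```python
import numpy as np

def pm_quantize(x, seed):
    """Permutation-modulation quantization: represent x by the permutation that sorts it,
    i.e., replace its components with the (fixed, descending) seed values arranged in the
    same rank order as x. Only the permutation needs to be encoded."""
    order = np.argsort(-x)              # indices of components from largest to smallest
    y = np.empty_like(seed, dtype=float)
    y[order] = seed                     # largest component gets the largest seed value, etc.
    return y, order

d = 8
rng = np.random.default_rng(9)
x = rng.normal(size=d)
# Hypothetical seed vector: descending values standing in for expected order statistics.
seed = np.sort(rng.normal(size=d))[::-1]
y, perm = pm_quantize(x, seed)
print("input    :", np.round(x, 2))
print("quantized:", np.round(y, 2))
```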

  15. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is then fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded based on a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.

  16. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the robust fault-tolerant compensation control problem for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique with sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through the flexible choice of design parameters. Compared with existing results, the derived inequality condition leads to stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a rocket fairing structural-acoustic model. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  17. HMM for hyperspectral spectrum representation and classification with endmember entropy vectors

    NASA Astrophysics Data System (ADS)

    Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.

    2015-10-01

    Hyperspectral images, due to their good spectral resolution, are extensively used for classification, but their high number of bands requires higher transmission bandwidth, higher data storage capability, and higher computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectra classification is based on the Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and the conditional probabilities that each EM belongs to each other EM. The EM conditional probabilities are transformed into EM entropy vectors, and those vectors are used as reference vectors for the classes in the scene. The conditional probability of a spectrum to be classified is also transformed into a spectrum entropy vector, which is assigned to a given class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested, with good results, using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64, and 32 spectral bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209 bands without significant loss in the classification process.

  18. Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    Veraguth, Olivier J.; Wang, Charles H.-T.

    2017-10-01

    Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure i.e. conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry to allow a reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization having, for example, the same area gaps, modulated by the scalar gravitational field.

  19. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.

  20. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.

  1. A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2013-02-01

    Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.

  2. Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors

    DTIC Science & Technology

    2011-04-15

    Only fragments of the record text are available; they note that the 1-bit quantizer reduces to a simple comparator testing whether values are above or below zero, enabling extremely simple, efficient, and fast quantization.

  3. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.

  4. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645

  5. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of the nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of this lower bound for each case. Simulation examples are also presented to illustrate the established results.

  6. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method clearly outperforms the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.

  7. VLSI realization of learning vector quantization with hardware/software co-design for different applications

    NASA Astrophysics Data System (ADS)

    An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

    2015-04-01

    This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and is designed as a SoC in 180 nm CMOS. The time-consuming nearest-Euclidean-distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the number of neurons in the competition layer, the weight values, and the input and output dimensions are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.
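
    A small worked example of the cycle-count formula quoted above (the parameter values are invented for illustration and do not come from the paper):

      import math

      def lvq_cycles(n, d, p, R):
          """Clock cycles to classify one d-dimensional input vector on the
          reported architecture: n reference vectors, p-word parallel input,
          pipeline depth R."""
          return n * math.ceil(d / p) + R

      # e.g. 128 reference vectors, 64-dimensional features, 8-word input, depth 10
      print(lvq_cycles(n=128, d=64, p=8, R=10))   # -> 1034 cycles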

  8. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.

  9. Particle on a torus knot: Constrained dynamics and semi-classical quantization in a magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Praloy, E-mail: praloydasdurgapur@gmail.com; Pramanik, Souvik, E-mail: souvick.in@gmail.com; Ghosh, Subir, E-mail: subirghosh20@gmail.com

    2016-11-15

    Kinematics and dynamics of a particle moving on a torus knot pose an interesting problem as a constrained system. In the first part of the paper we derive the modified symplectic structure, or Dirac brackets, of the above model in Dirac's Hamiltonian framework, both in toroidal and Cartesian coordinate systems. This algebra is used to study the dynamics, in particular small fluctuations in motion around a specific torus. The spatial symmetries of the system are also studied. In the second part of the paper we consider the quantum theory of a charge moving on a torus knot in the presence of a uniform magnetic field along the axis of the torus, in a semiclassical quantization framework. We exploit the Einstein-Brillouin-Keller (EBK) quantization scheme, which is appropriate for multidimensional systems. Embedding of the knot on a specific torus is inherently two dimensional, which gives rise to two quantization conditions. Thus, although the system reduces to a one-dimensional one after imposing the knot condition, it still exhibits non-planar features, which show up again in the study of fractional angular momentum. Finally we compare the results obtained from the EBK (multi-dimensional) and Bohr-Sommerfeld (single-dimensional) schemes. The energy levels and fractional spin depend on the torus knot parameters that specify its non-planar features. Interestingly, we show that there can be non-planar corrections to the planar anyon-like fractional spin.

  10. Dual and mixed nonsymmetric stress-based variational formulations for coupled thermoelastodynamics with second sound effect

    NASA Astrophysics Data System (ADS)

    Tóth, Balázs

    2018-03-01

    Some new dual and mixed variational formulations based on a priori nonsymmetric stresses will be developed for linearly coupled irreversible thermoelastodynamic problems associated with the second sound effect according to the Lord-Shulman theory. Having introduced the entropy flux vector instead of the entropy field and defined the dissipation and relaxation potentials as functions of the entropy flux, a seven-field dual and mixed variational formulation will be derived from the complementary Biot-Hamilton-type variational principle, using the Lagrange multiplier method. The momentum, displacement and infinitesimal rotation vectors, the a priori nonsymmetric stress tensor, the temperature change, the entropy field, and its flux vector are considered as the independent field variables of this formulation. In order to handle appropriately the six different groups of temporal prescriptions in the relaxed and/or the strong form, two variational integrals will be incorporated into the seven-field functional. Then, eliminating the entropy from this formulation through the strong fulfillment of the constitutive relation for the temperature change, with the use of the Legendre transformation between the enthalpy and Gibbs potential, a six-field dual and mixed action functional is obtained. As a further development, the elimination of the momentum and velocity vectors from the six-field principle through the a priori satisfaction of the kinematic equation and the constitutive relation for the momentum vector leads to a five-field variational formulation. These principles are suitable for transient analyses of structures exposed to a thermal shock of short temporal duration or a large heat flux.

  11. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  12. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequency and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g. only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that motivates spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case, as well as the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability, as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
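
    A minimal sketch of the "quadrant information" measurement model mentioned above, keeping only the signs of the real and imaginary parts of each complex measurement; the recovery algorithm itself (atomic norm soft thresholding) is not reproduced here.

      import numpy as np

      def quadrant_quantize(y):
          """Heavy quantization keeping only quadrant information: the signs of
          the real and imaginary parts of each complex measurement (a sketch of
          the measurement model described above, not the recovery algorithm)."""
          return np.stack([np.sign(y.real), np.sign(y.imag)], axis=-1)

      rng = np.random.default_rng(0)
      y = rng.standard_normal(5) + 1j * rng.standard_normal(5)   # toy measurements
      print(quadrant_quantize(y))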

  13. 15N backbone dynamics of the S-peptide from ribonuclease A in its free and S-protein bound forms: toward a site-specific analysis of entropy changes upon folding.

    PubMed Central

    Alexandrescu, A. T.; Rathgeb-Szabo, K.; Rumpel, K.; Jahnke, W.; Schulthess, T.; Kammerer, R. A.

    1998-01-01

    Backbone 15N relaxation parameters (R1, R2, 1H-15N NOE) have been measured for a 22-residue recombinant variant of the S-peptide in its free and S-protein bound forms. NMR relaxation data were analyzed using the "model-free" approach (Lipari & Szabo, 1982). Order parameters obtained from "model-free" simulations were used to calculate 1H-15N bond vector entropies using a recently described method (Yang & Kay, 1996), in which the form of the probability density function for bond vector fluctuations is derived from a diffusion-in-a-cone motional model. The average change in 1H-15N bond vector entropies for residues T3-S15, which become ordered upon binding of the S-peptide to the S-protein, is -12.6 ± 1.4 J/(mol·residue·K). 15N relaxation data suggest a gradient of decreasing entropy values moving from the termini toward the center of the free peptide. The difference between the entropies of the terminal and central residues is about -12 J/(mol·residue·K), a value comparable to that of the average entropy change per residue upon complex formation. Similar entropy gradients are evident in NMR relaxation studies of other denatured proteins. Taken together, these observations suggest denatured proteins may contain entropic contributions from non-local interactions. Consequently, calculations that model the entropy of a residue in a denatured protein as that of a residue in a di- or tri-peptide, might over-estimate the magnitude of entropy changes upon folding. PMID:9521116

  14. Excess Entropy Production in Quantum System: Quantum Master Equation Approach

    NASA Astrophysics Data System (ADS)

    Nakajima, Satoshi; Tokura, Yasuhiro

    2017-12-01

    For open systems described by the quantum master equation (QME), we investigate the excess entropy production under quasistatic operations between nonequilibrium steady states. The average entropy production is composed of the time integral of the instantaneous steady entropy production rate and the excess entropy production. We propose to define the average entropy production rate using the average energy and particle currents, which are calculated using full counting statistics with the QME. The excess entropy production is given by a line integral in the control parameter space and its integrand is called the Berry-Sinitsyn-Nemenman (BSN) vector. In the weakly nonequilibrium regime, we show that the BSN vector is described by ln ρ̆_0 and ρ_0, where ρ_0 is the instantaneous steady state of the QME and ρ̆_0 is that of the QME obtained by reversing the sign of the Lamb shift term. If the system Hamiltonian is non-degenerate or the Lamb shift term is negligible, the excess entropy production approximately reduces to the difference between the von Neumann entropies of the system. Additionally, we point out that the expression of the entropy production obtained in the classical Markov jump process is different from our result and show that these are approximately equivalent only in the weakly nonequilibrium regime.

  15. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its applicability to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees the accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such approximation. In the paper comparisons to Support Vector Machines are considered.

  16. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to excavating Raman spectra information. Moreover, it is fast and convenient, does not require the spectral peak counterpart, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.

  17. Reduced Order Podolsky Model

    NASA Astrophysics Data System (ADS)

    Thibes, Ronaldo

    2017-02-01

    We perform the canonical and path integral quantizations of a lower derivative order model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order of the equations of motion, Dirac brackets and effective action.

  18. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.

  19. Treatment of constraints in the stochastic quantization method and covariantized Langevin equation

    NASA Astrophysics Data System (ADS)

    Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji

    1993-04-01

    We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism.

  20. Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gorjala, Bhargavi

    1991-01-01

    Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preservation Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and the performance of the image coding algorithms are studied.
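
    For context, a plain first-order DPCM encoder with a uniform quantizer is sketched below in Python; it is the baseline scheme, not the edge-correcting or edge-preserving variants described above, and the step size is an arbitrary illustrative choice.

      import numpy as np

      def dpcm_encode(x, step=4.0):
          """Baseline first-order DPCM (not the edge-correcting variants above):
          predict each sample by the previous reconstruction and quantize the
          prediction error with a uniform quantizer of the given step size."""
          recon_prev = 0.0
          indices, recon = [], []
          for s in x:
              e = s - recon_prev                       # prediction error
              q = int(np.round(e / step))              # quantizer index (transmitted)
              recon_prev = recon_prev + q * step       # decoder-matched reconstruction
              indices.append(q)
              recon.append(recon_prev)
          return indices, recon

      idx, rec = dpcm_encode([10, 12, 11, 40, 42], step=4.0)
      print(idx, rec)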

  1. Bubble Entropy: An Entropy Almost Free of Parameters.

    PubMed

    Manis, George; Aktaruzzaman, Md; Sassi, Roberto

    2017-11-01

    Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and instead count the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
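
    A sketch of the core construction described above: embed the series, count the bubble-sort swaps needed to order each embedded vector, and take the entropy of the swap-count distribution. The paper's full definition (including its normalization across embedding dimensions and its choice of entropy) is not reproduced; this is an illustrative simplification.

      import numpy as np

      def swap_count(v):
          """Number of swaps bubble sort needs to order vector v."""
          v = list(v); swaps = 0
          for i in range(len(v)):
              for j in range(len(v) - 1 - i):
                  if v[j] > v[j + 1]:
                      v[j], v[j + 1] = v[j + 1], v[j]
                      swaps += 1
          return swaps

      def swap_distribution_entropy(x, m):
          """Embed the series in dimension m, count the bubble-sort swaps of each
          vector, and return the Shannon entropy of the swap-count distribution."""
          counts = [swap_count(x[i:i + m]) for i in range(len(x) - m + 1)]
          p = np.bincount(counts, minlength=m * (m - 1) // 2 + 1).astype(float)
          p = p[p > 0] / p.sum()
          return float(-(p * np.log(p)).sum())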

  2. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
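
    For the unconstrained continuous case summarized above, the form of the maximizing density can be sketched by a standard Lagrange-multiplier argument (a reconstruction of the textbook derivation, not the report's own notation):

      \max_f \; h(f) = -\int f(x)\,\ln f(x)\,dx
      \quad \text{s.t.} \quad \int f(x)\,dx = 1, \qquad \int |x|^p f(x)\,dx = \mu_p .

      Stationarity of the Lagrangian gives \ln f(x) = -1 - \lambda_0 - \lambda_1 |x|^p, so

      f^*(x) = A\, e^{-\lambda_1 |x|^p} \quad \text{(a generalized Gaussian)},
      \qquad h(f^*) = -\ln A + \lambda_1 \mu_p ,

      with A and \lambda_1 fixed by the two constraints. Since the scaling x \mapsto a x maps this family to itself, h(f^*) = \ln \|X\|_p + c_p for a constant c_p depending only on p, which is the straight-line relationship between maximum entropy and the logarithm of the L_p norm noted above.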

  3. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share them for personal or common benefit. Limited transmission resources impose the use of local analyses methodology, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series, as a cross-entropy method. Its low processing requirements makes it suitable for mobile, battery operated, self-attached sensing devices, with limited power and processor resources.
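
    A simplified Python sketch of a binarized cross-approximate-entropy statistic in the spirit described above: differences are binarized, m-bit vectors are compared by Hamming distance, and the usual phi(m) - phi(m+1) statistic is formed. The thresholds, normalizations, and windowing details of the actual (X)BinEn definition are not reproduced.

      import numpy as np

      def binarize(x):
          """Differential binary encoding: 1 where the series increases, else 0."""
          return (np.diff(x) > 0).astype(int)

      def xbinen_sketch(x, y, m=3, thr=0):
          """Cross-approximate-entropy style statistic on binarized series
          (illustrative simplification of the method described above)."""
          bx, by = binarize(x), binarize(y)

          def phi(mm):
              vx = np.array([bx[i:i + mm] for i in range(len(bx) - mm + 1)])
              vy = np.array([by[i:i + mm] for i in range(len(by) - mm + 1)])
              # fraction of y-vectors within Hamming distance thr of each x-vector
              c = [(np.abs(vy - v).sum(axis=1) <= thr).mean() for v in vx]
              c = np.maximum(c, 1e-12)                 # avoid log(0)
              return np.mean(np.log(c))

          return phi(m) - phi(m + 1)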

  4. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  5. Coherent states for the relativistic harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Aldaya, Victor; Guerrero, J.

    1995-01-01

    Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the relativistic harmonic oscillator as well as a generalized Bargmann transform relating Fock wave functions and a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations of the Lie product [z, z†] ≈ Energy (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying the Lie product [a, a†] = 1, the eigenstates of which are 'true' coherent states.

  6. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbol entropy to identify near infrared spectroscopy of special quality eggs. The authors selected normal eggs, free range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm⁻¹. Raw spectra were symbolically represented with an aggregation approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust when parameters change, and the highest recognition rate reaches 100%. The results show that the identification of special quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can be used as a new feature extraction method for near-infrared spectra.

  7. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  8. RED: a set of molecular descriptors based on Renyi entropy.

    PubMed

    Delgado-Soler, Laura; Toral, Raul; Tomás, M Santos; Rubio-Martinez, Jaime

    2009-11-01

    New molecular descriptors, RED (Renyi entropy descriptors), based on the generalized entropies introduced by Renyi are presented. Topological descriptors based on molecular features have proven to be useful for describing molecular profiles. Renyi entropy is used as a variability measure to contract a feature-pair distribution composing the descriptor vector. The performance of RED descriptors was tested for the analysis of different sets of molecular distances, virtual screening, and pharmacological profiling. A free parameter of the Renyi entropy has been optimized for all the considered applications.
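
    The underlying variability measure is the Rényi entropy of a discrete distribution, H_alpha(p) = ln(sum_i p_i^alpha) / (1 - alpha), which reduces to the Shannon entropy as alpha approaches 1. A small Python helper (an illustration of the measure, not the RED descriptor pipeline itself):

      import numpy as np

      def renyi_entropy(p, alpha):
          """Renyi entropy H_alpha(p) = ln(sum_i p_i**alpha) / (1 - alpha);
          reduces to Shannon entropy as alpha -> 1."""
          p = np.asarray(p, dtype=float)
          p = p[p > 0] / p.sum()
          if np.isclose(alpha, 1.0):
              return float(-(p * np.log(p)).sum())
          return float(np.log((p ** alpha).sum()) / (1.0 - alpha))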

  9. Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms

    DTIC Science & Technology

    1995-07-01

    Only fragments of the record text are available; they note that dictionary updates would be in the spirit of the Lempel-Ziv algorithm, with the decoder made aware of changes in the dictionary, and that the report develops a general vector compression algorithm based on frames, exploring properties of matching pursuit and its application to compressing data vectors in R^N.

  10. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  11. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings resulting from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding the MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
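
    The kind of quantity discussed above can be estimated numerically with a short Monte-Carlo sketch: quantize correlated Gaussian samples at Alice and Bob to two bits (sign plus a magnitude threshold) and estimate the mutual information from the joint histogram. The channel model and the threshold value below are illustrative assumptions, not the paper's optimized setup.

      import numpy as np

      def two_bit(z, t):
          """2-bit label: MSB = sign, LSB = magnitude above threshold t."""
          return 2 * (z > 0).astype(int) + (np.abs(z) > t).astype(int)

      def mutual_information(a, b, n_symbols=4):
          """Plug-in estimate of I(a; b) in bits from the joint histogram."""
          joint = np.bincount(a * n_symbols + b, minlength=n_symbols ** 2).astype(float)
          joint = joint.reshape(n_symbols, n_symbols) / joint.sum()
          pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
          nz = joint > 0
          return float((joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])).sum())

      # Illustrative Monte-Carlo at SNR = -3 dB (threshold chosen ad hoc,
      # not the optimized value discussed in the abstract)
      rng = np.random.default_rng(1)
      snr = 10 ** (-3 / 10)
      x = rng.standard_normal(200_000)
      y = x + rng.standard_normal(200_000) / np.sqrt(snr)
      print(mutual_information(two_bit(x, 0.7), two_bit(y, 0.7)))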

  12. Using constrained information entropy to detect rare adverse drug reactions from medical forums.

    PubMed

    Yi Zheng; Chaowang Lan; Hui Peng; Jinyan Li

    2016-08-01

    Adverse drug reaction (ADR) detection is critical to avoid malpractice, yet challenging due to uncertainty in pre-marketing review and underreporting in post-marketing surveillance. To overcome this predicament, social media based ADR detection methods have been proposed recently. However, existing studies are mostly co-occurrence based methods and face several issues, in particular leaving out rare ADRs and being unable to distinguish irrelevant ADRs. In this work, we introduce a constrained information entropy (CIE) method to solve these problems. CIE first recognizes drug-related adverse reactions using a predefined keyword dictionary and then captures high- and low-frequency (rare) ADRs by information entropy. Extensive experiments on a medical forums dataset demonstrate that CIE outperforms the state-of-the-art co-occurrence based methods, especially in rare ADR detection.

  13. On a canonical quantization of 3D Anti de Sitter pure gravity

    NASA Astrophysics Data System (ADS)

    Kim, Jihun; Porrati, Massimo

    2015-10-01

    We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly-curved AdS3.

  14. Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme

    NASA Astrophysics Data System (ADS)

    Ghosh, Subir

    1994-03-01

    The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.

  15. Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, S.

    1994-03-15

    The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.

  16. Canonical quantization of general relativity in discrete space-times.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2003-01-17

    It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.

  17. Associative Pattern Recognition In Analog VLSI Circuits

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1995-01-01

    A winner-take-all circuit selects the best-matching stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips were built and tested to demonstrate the concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal-oxide-semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), the vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. The chips exploit the parallel nature of the vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, the cells classify an input pattern in a fraction of a microsecond while consuming a power of a few microwatts.

  18. Transmitting Information by Propagation in an Ocean Waveguide: Computation of Acoustic Field Capacity

    DTIC Science & Technology

    2015-06-17

    Only fragments of the record text are available; they note that Eq. (4) is evaluated in terms of the differential entropy h, whose integrals can be identified as differential entropy terms by expanding the log, and that among all random vectors p with a given covariance matrix the entropy of p is maximized when p is ZMCSCG, since a normal distribution maximizes the entropy over all distributions with the same covariance [9, 18], implying that this is the optimal distribution on s as well.

  19. The gravity dual of Rényi entropy.

    PubMed

    Dong, Xi

    2016-08-12

    A remarkable yet mysterious property of black holes is that their entropy is proportional to the horizon area. This area law inspired the holographic principle, which was later realized concretely in gauge-gravity duality. In this context, entanglement entropy is given by the area of a minimal surface in a dual spacetime. However, discussions of area laws have been constrained to entanglement entropy, whereas a full understanding of a quantum state requires Rényi entropies. Here we show that all Rényi entropies satisfy a similar area law in holographic theories and are given by the areas of dual cosmic branes. This geometric prescription is a one-parameter generalization of the minimal surface prescription for entanglement entropy. Applying this we provide the first holographic calculation of mutual Rényi information between two disks of arbitrary dimension. Our results provide a framework for efficiently studying Rényi entropies and understanding entanglement structures in strongly coupled systems and quantum gravity.

  20. The gravity dual of Rényi entropy

    PubMed Central

    Dong, Xi

    2016-01-01

    A remarkable yet mysterious property of black holes is that their entropy is proportional to the horizon area. This area law inspired the holographic principle, which was later realized concretely in gauge-gravity duality. In this context, entanglement entropy is given by the area of a minimal surface in a dual spacetime. However, discussions of area laws have been constrained to entanglement entropy, whereas a full understanding of a quantum state requires Rényi entropies. Here we show that all Rényi entropies satisfy a similar area law in holographic theories and are given by the areas of dual cosmic branes. This geometric prescription is a one-parameter generalization of the minimal surface prescription for entanglement entropy. Applying this we provide the first holographic calculation of mutual Rényi information between two disks of arbitrary dimension. Our results provide a framework for efficiently studying Rényi entropies and understanding entanglement structures in strongly coupled systems and quantum gravity. PMID:27515122

  1. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. By global methods we mean those that aim to construct a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a vector quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the resulting assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, the Support Vector Machine (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method shows quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts.
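
    A minimal sketch of the LBG codebook design used for the local cells (codebook splitting followed by Lloyd iterations); it assumes the target codebook size is a power of two and is not the authors' exact implementation.

      import numpy as np

      def lbg(data, k, iters=20, eps=1e-3):
          """Minimal LBG sketch: grow the codebook by splitting and refine each
          size with Lloyd iterations (assign to nearest codevector, re-center).
          Assumes k is a power of two."""
          codebook = data.mean(axis=0, keepdims=True)
          while len(codebook) < k:
              codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
              for _ in range(iters):
                  d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                  labels = d2.argmin(axis=1)
                  for c in range(len(codebook)):
                      if np.any(labels == c):            # keep old codevector if cell is empty
                          codebook[c] = data[labels == c].mean(axis=0)
          return codebook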

  2. Spectral Entropy Can Predict Changes of Working Memory Performance Reduced by Short-Time Training in the Delayed-Match-to-Sample Task

    PubMed Central

    Tian, Yin; Zhang, Huiling; Xu, Wei; Zhang, Haiyong; Yang, Li; Zheng, Shuxing; Shi, Yupan

    2017-01-01

    Spectral entropy, generated by applying the Shannon entropy concept to the power distribution of the Fourier-transformed electroencephalogram (EEG), was utilized to measure the uniformity of the power spectral density underlying the EEG while subjects performed working memory tasks twice, i.e., before and after training. According to Signed Residual Time (SRT) scores, based on the trade-off between response speed and accuracy, 20 subjects were divided into two groups, namely high-performance and low-performance groups, to undertake working memory (WM) tasks. We found that spectral entropy derived from the retention period of WM on channel FC4 exhibited a high correlation with SRT scores. To this end, spectral entropy was used in a support vector machine classifier with linear kernel to differentiate these two groups. Receiver operating characteristic analysis and leave-one-out cross-validation (LOOCV) demonstrated that the averaged classification accuracy (CA) was 90.0 and 92.5% for intra-session and inter-session, respectively, indicating that spectral entropy could be used to distinguish these two different WM performance groups successfully. Furthermore, a support vector regression prediction model with radial basis function kernel and the root-mean-square error of prediction revealed that spectral entropy could be utilized to predict SRT scores for individual WM performance. After testing the changes in SRT scores and spectral entropy for each subject following short-time training, we found that the SRT scores of 16 of 20 subjects clearly improved after training and that for 15 of 20 subjects the SRT scores changed consistently with spectral entropy before and after training. The findings revealed that spectral entropy could be a promising indicator for predicting an individual's WM changes with training and further provide a novel WM-related application for brain-computer interfaces. PMID:28912701
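
    A compact sketch of the spectral-entropy feature itself: the Shannon entropy of the normalized power spectral density of a signal segment (windowing, band selection, and the channel handling of the study are omitted).

      import numpy as np

      def spectral_entropy(signal, normalize=True):
          """Shannon entropy of the normalized power spectral density
          (a sketch of the feature described above)."""
          psd = np.abs(np.fft.rfft(signal)) ** 2
          p = psd / psd.sum()
          p = p[p > 0]
          h = -(p * np.log2(p)).sum()
          return h / np.log2(len(p)) if normalize else h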

  3. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR, where the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
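
    For background, a Variant-I permutation-modulation quantizer of the kind alluded to above can be sketched in a few lines of Python (the level values and multiplicities are design parameters assumed here for illustration; the paper's construction and labeling are more elaborate):

        import numpy as np

        def pm_quantize(x, levels, counts):
            """Quantize vector `x` onto the permutation code generated by an initial
            vector with entries `levels` (sorted descending) repeated `counts` times:
            the nearest codeword replaces the largest counts[0] components of x by
            levels[0], the next counts[1] largest by levels[1], and so on."""
            order = np.argsort(-np.asarray(x, dtype=float))   # indices, largest first
            q = np.empty(len(x), dtype=float)
            start = 0
            for mu, m in zip(levels, counts):
                q[order[start:start + m]] = mu
                start += m
            return q

    Applied to the magnitudes of the Gaussian samples, the permutation (i.e., the ranking of the components) is the label that would be exchanged over the public channel.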

  4. Five dimensional microstate geometries

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Wei

    In this thesis, we discuss the possibility of exploring the statistical mechanics description of a black hole from the point of view of supergravity. Specifically, we study five-dimensional microstate geometries of a black hole or black ring. First, we review the method for finding the general three-charge BPS supergravity solutions proposed by Bena and Warner. By applying this method, we show that classical mergers of a black ring and a black hole on a [special characters omitted] base space are in general irreversible. On the other hand, we review the solutions on an ambi-polar Gibbons-Hawking (GH) base, which are bubbled geometries. There are many possible microstate geometries among the bubbled geometries. In particular, we show that a generic blob of GH points that satisfies certain conditions can be a horizonless microstate geometry of either a black hole or a black ring. Furthermore, using the result of the entropy analysis of the classical merger as a guide, we show that one can have a merger of a black-hole blob and a black-ring blob, or of two black-ring blobs, that corresponds to a classically irreversible merger. From the irreversible mergers, we find the scaling solutions and deep microstates, which are microstate geometries of a black hole/ring with macroscopic horizon. These solutions have the same AdS throats as classical black holes/rings, but instead of being infinite, the throat is smoothly capped off at a very large depth with some local structure at the bottom. For solutions produced from a U(1) × U(1)-invariant merger, the depth of the throat is limited by flux quantization. The mass gap is related to the depth of this throat, and we show that the mass gap of these solutions roughly matches the mass gap of the typical conformal-field-theory (CFT) states. Therefore, based on the AdS/CFT correspondence, they can be dual geometries of the typical CFT states that contribute to the entropy of a black hole/ring. On the other hand, we show that for solutions produced from a more general merger (without U(1) × U(1) invariance), the throat can be arbitrarily deep. This presents a puzzle from the point of view of the AdS/CFT correspondence. We propose that this puzzle may be solved by some quantization of the angle or by promoting the flux vectors to quantum spins. Finally, we suggest some directions for further study, including the puzzle of arbitrarily long AdS throats and a general coarse-graining picture of microstate geometries.

  5. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  6. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call scan predictive vector quantization. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data. This is done by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.

  7. Optical systolic array processor using residue arithmetic

    NASA Technical Reports Server (NTRS)

    Jackson, J.; Casasent, D.

    1983-01-01

    The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.

  8. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    DTIC Science & Technology

    2002-01-01

    their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for ... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous ... cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested

  9. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and the hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  10. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    NASA Astrophysics Data System (ADS)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can to a certain extent alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which comprises a driving-pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving-pattern recognizer according to a vehicle's driving information. This multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the condition recognition results. Simulation experiments were carried out after the model's validity was verified using a dynamometer test bench. Simulation results show that the proposed strategy can obtain better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
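
    For reference, the core of an LVQ1-style pattern recognizer of the kind mentioned above can be sketched as follows in Python (prototype initialization, learning rate, and epoch count are illustrative assumptions, not the article's tuned values):

        import numpy as np

        def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
            """Basic LVQ1: move the winning (nearest) prototype toward a training
            sample when their labels agree and away from it when they disagree."""
            W = prototypes.astype(float).copy()
            wl = np.asarray(proto_labels)
            for _ in range(epochs):
                for x, label in zip(X, y):
                    j = np.argmin(((W - x) ** 2).sum(axis=1))   # winning prototype
                    sign = 1.0 if wl[j] == label else -1.0
                    W[j] += sign * lr * (x - W[j])
            return W, wl

        def lvq1_predict(X, W, wl):
            return wl[((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)]

    In the article's setting, X would hold features extracted from the vehicle's driving information and y the driving-pattern labels; the trained prototypes would then select the energy management mode online.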

  11. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to reach a much lower bit rate for the LASIS interference pattern, which has special optical characteristics arising from the push-and-sweep LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the current block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.

  12. Quantization, Frobenius and Bi algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    NASA Astrophysics Data System (ADS)

    Sadrzadeh, Mehrnoosh

    2017-07-01

    Compact closed categories and Frobenius and bi algebras have been applied to model and reason about quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name ``categorical distributional compositional'' semantics, or in short, the ``DisCoCat'' model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the quantization functor of quantum field theory. The original DisCoCat model only used compact closed categories. Later, Frobenius algebras were added to it to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to the framework to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.

  13. Learning vector quantization neural networks improve accuracy of transcranial color-coded duplex sonography in detection of middle cerebral artery spasm--preliminary report.

    PubMed

    Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S; Mariak, Zenon; Melhem, Elias R; Krejza, Jaroslaw

    2008-01-01

    To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25-50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm.

  14. The gravity dual of Rényi entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xi

    A remarkable yet mysterious property of black holes is that their entropy is proportional to the horizon area. This area law inspired the holographic principle, which was later realized concretely in gauge-gravity duality. In this context, entanglement entropy is given by the area of a minimal surface in a dual spacetime. However, discussions of area laws have been constrained to entanglement entropy, whereas a full understanding of a quantum state requires Rényi entropies. Here we show that all Rényi entropies satisfy a similar area law in holographic theories and are given by the areas of dual cosmic branes. This geometric prescription is a one-parameter generalization of the minimal surface prescription for entanglement entropy. Applying this we provide the first holographic calculation of mutual Rényi information between two disks of arbitrary dimension. Our results provide a framework for efficiently studying Rényi entropies and understanding entanglement structures in strongly coupled systems and quantum gravity.

  15. Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.

    2010-10-01

    The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigation of shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated with and compared against GPS positions of in situ drifters. Error statistics are shown to range from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.

  16. Group prioritisation with unknown expert weights in incomplete linguistic context

    NASA Astrophysics Data System (ADS)

    Cheng, Dong; Cheng, Faxin; Zhou, Zhili; Wang, Juan

    2017-09-01

    In this paper, we study a group prioritisation problem in situations in which the expert weights are completely unknown and their judgement preferences are linguistic and incomplete. Starting from the theory of relative entropy (RE) and multiplicative consistency, an optimisation model is provided for deriving an individual priority vector without estimating the missing value(s) of an incomplete linguistic preference relation. In order to address the unknown expert weights in the group aggregation process, we define two new kinds of expert weight indicators based on RE: proximity entropy weight and similarity entropy weight. Furthermore, a dynamic-adjusting algorithm (DAA) is proposed to obtain an objective expert weight vector and capture the dynamic properties involved in it. Unlike the extant literature on group prioritisation, the proposed RE approach does not require pre-allocation of expert weights and can handle incomplete preference relations. An interesting finding is that once all the experts express their preference relations, the final expert weight vector derived from the DAA is fixed irrespective of the initial settings of expert weights. Finally, an application example is presented to validate the effectiveness and robustness of the RE approach.

  17. Motions and entropies in proteins as seen in NMR relaxation experiments and molecular dynamics simulations.

    PubMed

    Allnér, Olof; Foloppe, Nicolas; Nilsson, Lennart

    2015-01-22

    Molecular dynamics simulations of E. coli glutaredoxin1 in water have been performed to relate the dynamical parameters and entropy obtained in NMR relaxation experiments with results extracted from simulated trajectory data. NMR relaxation is the most widely used experimental method to obtain data on dynamics of proteins, but it is limited to relatively short timescales and to motions of backbone amides or in some cases (13)C-H vectors. By relating the experimental data to the all-atom picture obtained in molecular dynamics simulations, valuable insights into the interpretation of the experiment can be gained. We have estimated the internal dynamics and their timescales by calculating the generalized order parameters (O) for different time windows. We then calculate the quasiharmonic entropy (S) and compare it to the entropy calculated from the NMR-derived generalized order parameter of the amide vectors. Special emphasis is put on characterizing dynamics that are not expressed through the motions of the amide group. The NMR and MD methods suffer from complementary limitations, with NMR being restricted to local vectors and dynamics on a timescale determined by the rotational diffusion of the solute, while in simulations, it may be difficult to obtain sufficient sampling to ensure convergence of the results. We also evaluate the amount of sampling obtained with molecular dynamics simulations and how it is affected by the length of individual simulations, using clustering of the sampled conformations. We find that two structural turns act as hinges, allowing the α helix between them to undergo large, long-timescale motions that cannot be detected in the time window of the NMR dipolar relaxation experiments. We also show that the entropy obtained from the amide vector does not account for correlated motions of adjacent residues. Finally, we show that the sampling in a total of 100 ns of molecular dynamics simulation can be increased by around 50% by dividing the trajectory into 10 replicas with different starting velocities.

  18. The gravity dual of Rényi entropy

    DOE PAGES

    Dong, Xi

    2016-08-12

    A remarkable yet mysterious property of black holes is that their entropy is proportional to the horizon area. This area law inspired the holographic principle, which was later realized concretely in gauge-gravity duality. In this context, entanglement entropy is given by the area of a minimal surface in a dual spacetime. However, discussions of area laws have been constrained to entanglement entropy, whereas a full understanding of a quantum state requires Rényi entropies. Here we show that all Rényi entropies satisfy a similar area law in holographic theories and are given by the areas of dual cosmic branes. This geometric prescription is a one-parameter generalization of the minimal surface prescription for entanglement entropy. Applying this we provide the first holographic calculation of mutual Rényi information between two disks of arbitrary dimension. Our results provide a framework for efficiently studying Rényi entropies and understanding entanglement structures in strongly coupled systems and quantum gravity.

  19. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing from vectors in the first codebook zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder where they are decoded to specify transfer characteristics of filters used in producing the vector s_n from the receiver codebook vector selected by the vector index transmitted.
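
    The codebook search step that picks the vector minimizing distortion reduces, for a squared-error measure, to a nearest-codevector lookup; a minimal Python sketch (the synthesis-filter excitation and zero-state-response bookkeeping of the patent are omitted) is:

        import numpy as np

        def best_codevector(target, codebook):
            """Return the index of the codebook row with minimum squared-error
            distortion to `target`; only this index needs to be transmitted."""
            distortions = ((codebook - target) ** 2).sum(axis=1)
            return int(distortions.argmin())

    The decoder, holding an identical codebook, recovers the selected vector from the transmitted index alone.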

  20. Consistent description of kinetic equation with triangle anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu Shi; Gao Jianhua; Wang Qun

    2011-05-01

    We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.

  1. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).

  2. Physics-based Detection of Subpixel Targets in Hyperspectral Imagery

    DTIC Science & Technology

    2007-01-01

    Learning Vector Quantization; LWIR ... Wave Infrared (LWIR) from the 7.0 to 15.0 micron region as well. At these wavelengths, emissivity dominates the spectral signature. Emissivity is ... object emits instead of reflects. Initial work has already been finished applying the hybrid detectors to LWIR sensors [13]. However, target

  3. Application of SNODAS and hydrologic models to enhance entropy-based snow monitoring network design

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin; Razavi, Tara; Tapsoba, Dominique; Gobena, Adam; Weber, Frank; Pietroniro, Alain

    2018-06-01

    Snow has a unique characteristic in the water cycle, that is, snow falls during the entire winter season, but the discharge from snowmelt is typically delayed until the melting period and occurs in a relatively short period. Therefore, reliable observations from an optimal snow monitoring network are necessary for an efficient management of snowmelt water for flood prevention and hydropower generation. The Dual Entropy and Multiobjective Optimization method is applied to design snow monitoring networks in La Grande River Basin in Québec and Columbia River Basin in British Columbia. While the networks are optimized to have the maximum amount of information with minimum redundancy based on entropy concepts, this study extends the traditional entropy applications to the hydrometric network design by introducing several improvements. First, several data quantization cases and their effects on the snow network design problems were explored. Second, the applicability of the Snow Data Assimilation System (SNODAS) products as synthetic datasets of potential stations was demonstrated in the design of the snow monitoring network of the Columbia River Basin. Third, beyond finding the Pareto-optimal networks from the entropy-based multi-objective optimization, the networks obtained for La Grande River Basin were further evaluated by applying three hydrologic models. The calibrated hydrologic models simulated discharges using the updated snow water equivalent data from the Pareto-optimal networks. Then, the model performances for high flows were compared to determine the best optimal network for enhanced spring runoff forecasting.

  4. X-ray and simulation studies of water

    NASA Astrophysics Data System (ADS)

    Nilsson, A.; Schlesinger, D.; G. M. Pettersson, L.

    Here we present a picture that combines discussions regarding the thermodynamic anomalies in ambient and supercooled water with recent interpretations of X-ray spectroscopy and scattering data of water. At ambient temperatures most molecules favor a closer packing than tetrahedral, with strongly distorted hydrogen bonds, which allows the quantized librational modes to be excited and contribute to the entropy, but with enthalpically favored tetrahedrally bonded water patches appearing as fluctuations, a competition between entropy and enthalpy. Upon cooling water, the number of molecules participating in tetrahedral structures and the size of the tetrahedral patches increase. The two local structures are connected to the liquid-liquid critical point hypothesis in supercooled water corresponding to high-density liquid (HDL) and low-density liquid (LDL). We demonstrate that the HDL local structure deviates from a tetrahedral coordination not only through a collapse of the 2nd shell but also through severe distortions around the 1st coordination shell.

  5. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper reports the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated with illustrative examples. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.

  6. Vacuum polarization of the quantized massive fields in Friedman-Robertson-Walker spacetime

    NASA Astrophysics Data System (ADS)

    Matyjasek, Jerzy; Sadurski, Paweł; Telecka, Małgorzata

    2014-04-01

    The stress-energy tensor of the quantized massive fields in a spatially open, flat, and closed Friedman-Robertson-Walker universe is constructed using the adiabatic regularization (for the scalar field) and the Schwinger-DeWitt approach (for the scalar, spinor, and vector fields). It is shown that the stress-energy tensor calculated in the sixth adiabatic order coincides with the result obtained from the regularized effective action, constructed from the heat kernel coefficient a3. The behavior of the tensor is examined in the power-law cosmological models, and the semiclassical Einstein field equations are solved exactly in a few physically interesting cases, such as the generalized Starobinsky models.

  7. The covariance matrix for the solution vector of an equality-constrained least-squares problem

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1976-01-01

    Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
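
    One standard way to obtain both the solution and its covariance is the null-space method; the Python sketch below (assuming NumPy/SciPy, Cov(b) = sigma2*I, an exact constraint, and full column rank of A restricted to the null space of C) illustrates the idea rather than the book's specific algorithms:

        import numpy as np
        from scipy.linalg import lstsq, null_space

        def constrained_lsq_with_cov(A, b, C, d, sigma2=1.0):
            """Solve min ||Ax - b|| subject to Cx = d and return the solution x
            together with its covariance matrix."""
            x0, *_ = lstsq(C, d)            # a particular solution of Cx = d
            K = null_space(C)               # orthonormal basis of the null space of C
            AK = A @ K
            z, *_ = lstsq(AK, b - A @ x0)   # unconstrained problem in the null space
            x = x0 + K @ z
            cov = sigma2 * K @ np.linalg.inv(AK.T @ AK) @ K.T
            return x, cov

    Only the component of x lying in the null space of C is random (the constraint is deterministic), which is why the covariance is rank-deficient in the constrained directions.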

  8. Maximum entropy modeling of metabolic networks by constraining growth-rate moments predicts coexistence of phenotypes

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele

    2017-12-01

    In this work maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control on the average growth (optimization) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli, where it quantifies the metabolic activity of slow-growing phenotypes and provides a quantitative map with metabolic fluxes, opening the possibility of detecting coexistence from flux data. A preliminary analysis on data for E. coli cultures in standard conditions shows a degeneracy of the inferred parameters that extends into the coexistence region.

  9. Comparison of Radio Frequency Distinct Native Attribute and Matched Filtering Techniques for Device Discrimination and Operation Identification

    DTIC Science & Technology

    identification. URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple...discriminant analysis/maximum likelihood (MDA/ML) classification, 2) RF-DNA fingerprints paired with generalized relevance learning vector quantized

  10. Image segmentation using fuzzy LVQ clustering networks

    NASA Technical Reports Server (NTRS)

    Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.

    1992-01-01

    In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of Kohonen learning vector quantization (LVQ), which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of LVQ, is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
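
    For orientation, the FCM half of such a scheme alternates membership and centroid updates; a plain FCM sketch in Python (fuzzifier, iteration count, and initialization are illustrative; the paper's fuzzy LVQ additionally borrows LVQ's learning-rate and update strategies) is:

        import numpy as np

        def fcm(X, c, m=2.0, n_iter=100, seed=0):
            """Plain fuzzy c-means on rows of X: U holds fuzzy memberships (N x c),
            V the cluster centers (c x d)."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                Um = U ** m
                V = (Um.T @ X) / Um.sum(axis=0)[:, None]            # centroid update
                d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))                    # membership update
                U /= U.sum(axis=1, keepdims=True)
            return U, V

    For image segmentation, X would contain the feature vectors extracted from the raw image, and each pixel is assigned to the cluster with the largest membership.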

  11. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
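
    For finite-alphabet, first-order processes the transfer entropy from y to x can be computed with a simple plug-in estimator (a hedged sketch; the symbol alphabets and history length of 1 are assumptions, and by the paper's result the log-likelihood-ratio statistic is essentially this estimate scaled by twice the number of transitions):

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y):
            """Plug-in estimate (in nats) of TE_{y->x} = sum p(x1, x0, y0) *
            log[ p(x1 | x0, y0) / p(x1 | x0) ] for discrete sequences x, y."""
            n = len(x) - 1
            triples = Counter(zip(x[1:], x[:-1], y[:-1]))
            pairs_xy = Counter(zip(x[:-1], y[:-1]))
            pairs_xx = Counter(zip(x[1:], x[:-1]))
            singles = Counter(x[:-1])
            te = 0.0
            for (x1, x0, y0), c in triples.items():
                p_joint = c / n
                p_full = c / pairs_xy[(x0, y0)]            # p(x1 | x0, y0)
                p_marg = pairs_xx[(x1, x0)] / singles[x0]  # p(x1 | x0)
                te += p_joint * np.log(p_full / p_marg)
            return te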

  12. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  13. Canonical quantization of constrained systems and coadjoint orbits of Diff(S^1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherer, W.M.

    It is shown that Dirac's treatment of constrained Hamiltonian systems and Schwinger's action principle quantization lead to identical commutation relations. An explicit relation between the Lagrange multipliers in the action principle approach and the additional terms in the Dirac bracket is derived. The equivalence of the two methods is demonstrated in the case of the non-linear sigma model. Dirac's method is extended to superspace and this extension is applied to the chiral superfield. The Dirac brackets of the massive interacting chiral superfield are derived and shown to give the correct commutation relations for the component fields. The Hamiltonian of the theory is given and the Hamiltonian equations of motion are computed. They agree with the component field results. An infinite sequence of differential operators which are covariant under the coadjoint action of Diff(S^1) and analogous to Hill's operator is constructed. They map conformal fields of negative integer and half-integer weight to their dual space. Some properties of these operators are derived and possible applications are discussed. The Korteweg-de Vries equation is formulated as a coadjoint orbit of Diff(S^1).

  14. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered, the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using a Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  15. Particle definitions and the information loss paradox

    NASA Astrophysics Data System (ADS)

    Venditti, Alex

    An investigation of information loss in black hole spacetimes is performed. We demonstrate that the definition of particles as energy levels of the harmonic oscillator will not have physical significance in general and is thus not a good instrument to study the radiation of black holes. This is due to the ambiguity of the choice of coordinates on the phase space of the quantum field. We demonstrate how to identify quantum states in the functional Schrödinger picture. We demonstrate that information is truly lost in the case of a Vaidya black hole (a black hole formed from null dust) if we neglect back reaction. This is done by quantizing the constrained classical system of a Klein-Gordon field in a Vaidya background. The interaction picture of quantum mechanics can be applied to this system. We find a physically well-motivated vacuum state for a spherically symmetric spacetime with an extra conformal Killing vector. We also demonstrate how to calculate the response of a particle detector in a Lemaître-Tolman-Bondi spacetime with a self-similarity. Finally, some of the claims and confusions surrounding Unruh radiation, Hawking radiation, and the equivalence principle are investigated and shown to be false.

  16. BFV-BRST quantization of two-dimensional supergravity

    NASA Astrophysics Data System (ADS)

    Fujiwara, T.; Igarashi, Y.; Kuriki, R.; Tabei, T.

    1996-01-01

    Two-dimensional supergravity theory is quantized as an anomalous gauge theory. In the Batalin-Fradkin (BF) formalism, the anomaly-canceling super-Liouville fields are introduced to identify the original second-class constrained system with a gauge-fixed version of a first-class system. The BFV-BRST quantization applies to formulate the theory in the most general class of gauges. A local effective action constructed in the configuration space contains two super-Liouville actions; one is a noncovariant but local functional written only in terms of two-dimensional supergravity fields, and the other contains the super-Liouville fields canceling the super-Weyl anomaly. Auxiliary fields for the Liouville and the gravity supermultiplets are introduced to make the BRST algebra close off-shell. Inclusion of them turns out to be essentially important especially in the super-light-cone gauge fixing, where the supercurvature equations (∂_-^3 g_{++} = ∂_-^2 χ_{++} = 0) are obtained as a result of BRST invariance of the theory. Our approach reveals the origin of the OSp(1,2) current algebra symmetry in a transparent manner.

  17. Entropy and temperature from black-hole/near-horizon-CFT duality

    NASA Astrophysics Data System (ADS)

    Rodriguez, Leo; Yildirim, Tuna

    2010-08-01

    We construct a two-dimensional CFT, in the form of a Liouville theory, in the near-horizon limit of four- and three-dimensional black holes. The near-horizon CFT assumes two-dimensional black hole solutions first introduced by Christensen and Fulling (1977 Phys. Rev. D 15 2088-104) and expanded to a greater class of black holes via Robinson and Wilczek (2005 Phys. Rev. Lett. 95 011303). The two-dimensional black holes admit a Diff(S1) subalgebra, which upon quantization in the horizon limit becomes Virasoro with calculable central charge. This charge and the lowest Virasoro eigen-mode reproduce the correct Bekenstein-Hawking entropy of the four- and three-dimensional black holes via the known Cardy formula (Blöte et al 1986 Phys. Rev. Lett. 56 742; Cardy 1986 Nucl. Phys. B 270 186). Furthermore, the two-dimensional CFT's energy-momentum tensor is anomalous. However, in the horizon limit the energy-momentum tensor becomes holomorphic equaling the Hawking flux of the four- and three-dimensional black holes. This encoding of both entropy and temperature provides a uniformity in the calculation of black hole thermodynamic and statistical quantities for the non-local effective action approach.

  18. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    PubMed

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.

  19. Thermopower and the Fractional Quantized Hall Effect in the N=1 Landau Level

    NASA Astrophysics Data System (ADS)

    Chickering, W. E.; Eisenstein, J. P.; Pfeiffer, L. N.; West, K. W.

    2012-02-01

    Having recently eliminated an issue involving long thermal time constants [1], we are now able to resolve diffusion thermopower deep into the fractional quantized Hall effect (FQHE) regime. In this talk we report measurements of thermopower in the first excited (N=1) Landau level as a continuous function of magnetic field down to temperatures as low as 30mK. Above 50mK we can clearly resolve the ν = 5/2 as well as ν = 7/3, 8/3, and 14/5 FQHEs in both the electrical and thermoelectrical transport. Below 50mK a prominent feature of the electrical transport in the first excited Landau level is the Re-entrant Integer Quantized Hall Effect (RIQHE) which is associated with insulating collective phases [2]. In this temperature regime the thermopower exhibits a series of intriguing sign reversals that are as yet not fully understood. We will conclude with a brief discussion of the connection between thermopower and the entropy of the 2D electron system. This connection is invoked by a recent prediction [3] of the thermopower at ν = 5/2, which assumes the ground state is the non-Abelian Moore-Read paired composite fermion state. [1] Chickering, Phys. Rev. B 81, 245319 (2010); [2] Eisenstein, Phys. Rev. Lett. 88, 076801 (2002); [3] Yang, Phys. Rev. B 79, 115317 (2009)

  20. Constraining the loop quantum gravity parameter space from phenomenology

    NASA Astrophysics Data System (ADS)

    Brahma, Suddhasattwa; Ronco, Michele

    2018-03-01

    Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.

  1. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    PubMed

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
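
    As background for the bin-based indexing, static equi-depth (quantile) binning looks as follows in Python; the query-dependent aspect that defines QED is the paper's contribution and is not reproduced here:

        import numpy as np

        def equi_depth_edges(values, n_bins):
            """Equi-depth bin edges: each bin receives roughly the same number of
            values (edges are the empirical quantiles)."""
            return np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))

        def quantize(values, edges):
            """Map each value to its bin index in 0 .. n_bins - 1."""
            return np.clip(np.digitize(values, edges[1:-1]), 0, len(edges) - 2)

    Equi-depth bins adapt to the data distribution, so densely populated value ranges receive finer quantization than sparse ones.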

  2. Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of ½

    PubMed Central

    Maryasov, Alexander G.

    2012-01-01

    The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or ‘powder’ sample when g tensor anisotropy is significant. PMID:22743542

  3. Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of 1/2

    NASA Astrophysics Data System (ADS)

    Maryasov, Alexander G.; Bowman, Michael K.

    2012-08-01

    The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or 'powder' sample when g tensor anisotropy is significant.

  4. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

    Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved EMI event classification based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids in the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods. This supports the development of an expert-knowledge-based intelligent system. Since this method is demonstrated to be successful with real field data, it offers the prospect of real-world application to EMI condition monitoring. PMID:29385030
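
    Dispersion entropy itself can be sketched compactly in Python (the embedding dimension, number of classes, and delay below are common defaults assumed for illustration, not the values used in the paper):

        import numpy as np
        from collections import Counter
        from scipy.stats import norm

        def dispersion_entropy(x, m=3, c=6, delay=1):
            """Normalized dispersion entropy: map samples to c classes through a
            normal CDF fitted to the series, form length-m dispersion patterns, and
            take the Shannon entropy of the pattern frequencies divided by ln(c^m)."""
            x = np.asarray(x, dtype=float)
            y = norm.cdf(x, loc=x.mean(), scale=x.std())            # map to (0, 1)
            z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)    # class labels 1..c
            patterns = Counter(
                tuple(z[i:i + m * delay:delay])
                for i in range(len(z) - (m - 1) * delay)
            )
            p = np.array(list(patterns.values()), dtype=float)
            p /= p.sum()
            return float(-(p * np.log(p)).sum() / np.log(c ** m))

    Values near 1 indicate that all dispersion patterns are roughly equally likely (noise-like signals), whereas regular signals give values closer to 0.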

  5. Collective coordinates and constrained hamiltonian systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayi, O.F.

    1992-07-01

    A general method of incorporating collective coordinates (transformation of fields into an overcomplete basis) with constrained Hamiltonian systems is given where the original phase space variables and collective coordinates can be bosonic and/or fermionic. This method is illustrated by applying it to the SU(2) Yang-Mills-Higgs theory and its BFV-BRST quantization is discussed. Moreover, this formalism is used to give a systematic way of converting second class constraints into effectively first class ones, by considering second class constraints as first class constraints and gauge fixing conditions. This approach is applied to the massive superparticle, the Proca Lagrangian, and some topological quantum field theories.

  6. Detection of Dendritic Spines Using Wavelet Packet Entropy and Fuzzy Support Vector Machine.

    PubMed

    Wang, Shuihua; Li, Yang; Shao, Ying; Cattani, Carlo; Zhang, Yudong; Du, Sidan

    2017-01-01

    The morphology of dendritic spines is highly correlated with neuron function, so characterizing it is valuable for dendritic spine research. At present, however, spine types are typically labelled manually for statistical analysis. In this work, we proposed an approach based on the combination of wavelet contour analysis for backbone detection, wavelet packet entropy, and a fuzzy support vector machine for spine classification. The experiments show that this approach is promising: the average detection accuracy reaches 97.3% for "MushRoom", 94.6% for "Stubby", and 97.2% for "Thin" spines.

  7. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    Object Segmentation, Texture Segmentation, Image Compression, Image Halftoning, Neural Network, Parallel Algorithms, 2D and 3D... Vector Quantization of Wavelet Transform Coefficients... Adaptive Image Halftoning based on Wavelet... application has been directed to the adaptive image halftoning. The gray information at a pixel, including its gray value and gradient, is represented by

  8. Satellite classification and segmentation using non-additive entropy

    NASA Astrophysics Data System (ADS)

    Assirati, Lucas; Souto Martinez, Alexandre; Martinez Bruno, Odemir

    2014-03-01

    Here we compare the Boltzmann-Gibbs-Shannon (standard) entropy with the Tsallis entropy for pattern recognition and segmentation of colored satellite images obtained via "Google Earth". By segmentation we mean partitioning an image to locate regions of interest. We discriminate and define image partition classes according to a training basis consisting of three pattern classes: aquatic, urban, and vegetation regions. Our numerical experiments demonstrate that the Tsallis entropy, used as a feature vector composed of distinct entropic indexes q, outperforms the standard entropy. The proposed methodology has several applications, since satellite images can be used to monitor migration from rural to urban regions, agricultural activities, oil spreading on the ocean, etc.
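    As an illustration of such an entropic feature vector, the sketch below computes Tsallis entropies of a gray-level histogram at several entropic indexes q (the particular q values and histogram binning are assumptions, not the authors' settings); the Boltzmann-Gibbs-Shannon entropy is recovered in the limit q → 1.

        import numpy as np

        def tsallis_entropy(p, q):
            """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); Shannon as q -> 1."""
            p = p[p > 0]
            if np.isclose(q, 1.0):
                return -(p * np.log(p)).sum()
            return (1.0 - (p ** q).sum()) / (q - 1.0)

        def tsallis_feature_vector(gray_patch, qs=(0.1, 0.5, 1.0, 1.5, 2.0), bins=256):
            """Feature vector of Tsallis entropies computed from a patch histogram."""
            hist, _ = np.histogram(gray_patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            return np.array([tsallis_entropy(p, q) for q in qs])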

  9. Thermodynamic constraints on a varying cosmological-constant-like term from the holographic equipartition law with a power-law corrected entropy

    NASA Astrophysics Data System (ADS)

    Komatsu, Nobuyoshi

    2017-11-01

    A power-law corrected entropy based on a quantum entanglement is considered to be a viable black-hole entropy. In this study, as an alternative to Bekenstein-Hawking entropy, a power-law corrected entropy is applied to Padmanabhan's holographic equipartition law to thermodynamically examine an extra driving term in the cosmological equations for a flat Friedmann-Robertson-Walker universe at late times. Deviations from the Bekenstein-Hawking entropy generate an extra driving term (proportional to the α th power of the Hubble parameter, where α is a dimensionless constant for the power-law correction) in the acceleration equation, which can be derived from the holographic equipartition law. Interestingly, the value of the extra driving term in the present model is constrained by the second law of thermodynamics. From the thermodynamic constraint, the order of the driving term is found to be consistent with the order of the cosmological constant measured by observations. In addition, the driving term tends to be constantlike when α is small, i.e., when the deviation from the Bekenstein-Hawking entropy is small.

  10. Wavelet Packet Entropy for Heart Murmurs Classification

    PubMed Central

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Ranga, Sri

    2012-01-01

    Heart murmurs are the first signs of cardiac valve disorders. Several studies have been conducted in recent years to automatically differentiate normal heart sounds from heart sounds with murmurs using various types of audio features. Entropy was successfully used as a feature to distinguish different heart sounds. In this paper, a new entropy measure was introduced to analyze heart sounds, and the feasibility of using this entropy in the classification of five types of heart sounds and murmurs was shown. The entropy was previously introduced to analyze mammograms. Four common murmurs were considered, including aortic regurgitation, mitral regurgitation, aortic stenosis, and mitral stenosis. The wavelet packet transform was employed for heart sound analysis, and the entropy was calculated for deriving feature vectors. Five types of classification were performed to evaluate the discriminatory power of the generated features. The best results were achieved by BayesNet with 96.94% accuracy. The promising results substantiate the effectiveness of the proposed wavelet packet entropy for heart sounds classification. PMID:23227043
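    One common way to turn a wavelet packet decomposition into an entropy feature is sketched below using PyWavelets; the wavelet, decomposition level, and the energy-based entropy definition are assumptions and may differ from the paper's measure, which was originally introduced for mammograms.

        import numpy as np
        import pywt

        def wavelet_packet_entropy(signal, wavelet="db4", level=4):
            """Shannon entropy of the normalized energy distribution over the
            terminal wavelet packet nodes of a 1-D signal."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            energies = np.array([np.sum(np.square(node.data))
                                 for node in wp.get_level(level, order="natural")])
            p = energies / energies.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum()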

  11. Hamiltonian and Thermodynamic Modeling of Quantum Turbulence

    NASA Astrophysics Data System (ADS)

    Grmela, Miroslav

    2010-10-01

    The state variables in the novel model introduced in this paper are the fields playing this role in the classical Landau-Tisza model and additional fields of mass, entropy (or temperature), superfluid velocity, and gradient of the superfluid velocity, all depending on the position vector and another three-dimensional vector labeling the scale, describing the small-scale structure developed in 4He superfluid experiencing turbulent motion. The fluxes of mass, momentum, energy, and entropy in the position space, as well as the fluxes of energy and entropy in scales, appear in the time evolution equations as explicit functions of the state variables and of their conjugates. The fundamental thermodynamic relation relating the fields to their conjugates is left in this paper undetermined. The GENERIC structure of the equations serves two purposes: (i) it guarantees that solutions to the governing equations, independently of the choice of the fundamental thermodynamic relation, agree with the observed compatibility with thermodynamics, and (ii) it is used as a guide in the construction of the novel model.

  12. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
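    The essence of the approach, in the notation of the classical Debye formula I(q) = Σ_i Σ_j f_i(q) f_j(q) sin(q·r_ij)/(q·r_ij), is to replace the double sum over atoms by a sum over binned pair distances per atom-type pair. A minimal Python sketch follows (the data-structure layout and names are illustrative, not the authors' code).

        import numpy as np

        def debye_from_binned_pairs(q, pair_hists, form_factors):
            """Debye intensity from pair distances binned per atom-type pair.

            q            : 1-D array of scattering-vector magnitudes
            pair_hists   : {(type_a, type_b): (bin_centers, counts)}  [assumed layout]
            form_factors : {atom_type: callable f(q)}                 [assumed layout]"""
            intensity = np.zeros_like(q, dtype=float)
            for (a, b), (r, counts) in pair_hists.items():
                qr = np.outer(q, r)
                # sin(qr)/(qr) with the r = 0 (self) bins contributing 1
                sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
                intensity += form_factors[a](q) * form_factors[b](q) * (sinc @ counts)
            return intensity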

  15. Quaternionic Kähler Detour Complexes and N = 2 Supersymmetric Black Holes

    NASA Astrophysics Data System (ADS)

    Cherney, D.; Latini, E.; Waldron, A.

    2011-03-01

    We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four dimensional N = 2 supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost extended Hilbert space.) The resulting quantized models are gauge invariant field theories with fields equaling sections of special quaternionic vector bundles. They underlie and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston’s complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery, and although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.

  16. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
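    The flavor of such a scheme can be conveyed with a much-simplified Python sketch: fold the prediction residuals to non-negative integers and, for each block of 21 pixels, pick the code parameter minimizing the coded length. This is only a toy stand-in; the actual Basic Compressor's three concatenated codes and its line-to-line mode switching are not reproduced here.

        def fold(residual):
            """Map signed prediction residuals to non-negative integers."""
            return 2 * residual if residual >= 0 else -2 * residual - 1

        def best_block_cost(block, ks=(0, 1, 2, 3)):
            """Pick, per block, the parameter k minimizing a Rice-style code length:
            unary quotient (v >> k), one stop bit, and k remainder bits per value."""
            costs = {k: sum((v >> k) + 1 + k for v in block) for k in ks}
            k = min(costs, key=costs.get)
            return k, costs[k]

        def line_cost(pixels, block_size=21):
            """Sample-to-sample prediction followed by per-block code selection."""
            residuals = [fold(pixels[i] - (pixels[i - 1] if i else 0))
                         for i in range(len(pixels))]
            return sum(best_block_cost(residuals[i:i + block_size])[1]
                       for i in range(0, len(residuals), block_size))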

  17. Higher-n triangular dilatonic black holes

    NASA Astrophysics Data System (ADS)

    Zadora, Anton; Gal'tsov, Dmitri V.; Chen, Chiang-Mei

    2018-04-01

    Dilaton gravity with form fields is known to possess dyon solutions with two horizons for the discrete "triangular" values of the dilaton coupling constant a = √(n(n + 1)/2). This sequence, first obtained numerically and then explained analytically as a consequence of the regularity of the dilaton, should have some higher-dimensional and/or group-theoretical origin. Meanwhile, this origin was explained earlier only for n = 1, 2, in which cases the solutions were known analytically. We extend this explanation to n = 3, 5, presenting analytical triangular solutions for the theory with different dilaton couplings a, b in the electric and magnetic sectors, in which case the quantization condition reads ab = n(n + 1)/2. The solutions are derived via the Toda chains for the B2 and G2 Lie algebras. They are found in closed form in general D space-time dimensions. The solutions satisfy the entropy product rules, indicating a microscopic origin of their entropy, and have negative binding energy in the extremal case.

  18. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.

  19. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
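    For reference, the projection at the heart of the method is usually written as below (stated here as a sketch; see the paper for the precise MaxEnt construction and its conditions). Given a reference density q(x) whose feature density under z = T(x) is q_z(z), the projected density consistent with a target feature density p_z(z) is

      \[
        p(\mathbf{x}) \;=\; \frac{p_z\bigl(T(\mathbf{x})\bigr)}{q_z\bigl(T(\mathbf{x})\bigr)}\, q(\mathbf{x}),
      \]

    and the MaxEnt variant chooses the reference so that p(x) has the highest entropy among all densities consistent with p_z.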

  20. Parametric scaling from species relative abundances to absolute abundances in the computation of biological diversity: a first proposal using Shannon's entropy.

    PubMed

    Ricotta, Carlo

    2003-01-01

    Traditional diversity measures such as the Shannon entropy are generally computed from the species' relative abundance vector of a given community to the exclusion of species' absolute abundances. In this paper, I first mention some examples where the total information content associated with a given community may be more adequate than Shannon's average information content for a better understanding of ecosystem functioning. Next, I propose a parametric measure of statistical information that contains both Shannon's entropy and total information content as special cases of this more general function.
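    The two special cases mentioned are easy to state concretely (a minimal sketch; the parametric measure unifying them is defined in the paper and is not reproduced here), assuming the common convention that total information content is the community size N times the average per-individual information H:

        import numpy as np

        def shannon_entropy(abundances):
            """Average information H from a vector of species absolute abundances."""
            n = np.asarray(abundances, dtype=float)
            p = n / n.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        def total_information_content(abundances):
            """Total information content: community size N times H (assumed convention)."""
            n = np.asarray(abundances, dtype=float)
            return n.sum() * shannon_entropy(n)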

  1. Master equation for open two-band systems and its applications to Hall conductance

    NASA Astrophysics Data System (ADS)

    Shen, H. Z.; Zhang, S. S.; Dai, C. M.; Yi, X. X.

    2018-02-01

    Hall conductivity in the presence of a dephasing environment has recently been investigated with a dissipative term introduced phenomenologically. In this paper, we study the dissipative topological insulator (TI) and its topological transition in the presence of quantized electromagnetic environments. A Lindblad-type equation is derived to determine the dynamics of a two-band system. When the two-band model describes TIs, the environment may be the fluctuations of radiation that surround the TIs. We find the dependence of decay rates in the master equation on Bloch vectors in the two-band system, which leads to a mixing of the band occupations. Hence the environment-induced current is in general not perfectly topological in the presence of coupling to the environment, although deviations are small in the weak limit. As an illustration, we apply the Bloch-vector-dependent master equation to TIs and calculate the Hall conductance of tight-binding electrons in a two-dimensional lattice. The influence of environments on the Hall conductance is presented and discussed. The calculations show that the phase transition points of the TIs are robust against the quantized electromagnetic environment. The results might bridge the gap between quantum optics and topological photonic materials.

  2. Musical sound analysis/synthesis using vector-quantized time-varying spectra

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Beauchamp, James W.

    2002-11-01

    A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter ''brightness,'' to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
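    A stripped-down Python sketch of the codebook step is given below, using k-means as the vector quantizer via SciPy's kmeans2/vq; the spectral pre-flattening, tilt extraction, index smoothing, and interpolation described above are omitted.

        import numpy as np
        from scipy.cluster.vq import kmeans2, vq

        def build_spectral_codebook(frames, n_codewords=64, seed=0):
            """Vector-quantize phase-vocoder amplitude-spectrum frames
            (rows of `frames`) into a small codebook."""
            frames = np.asarray(frames, dtype=float)
            codebook, _ = kmeans2(frames, n_codewords, minit="++", seed=seed)
            indices, _ = vq(frames, codebook)
            return codebook, indices

        def resynthesize_frames(codebook, indices):
            """Crude resynthesis: per-frame codeword lookup, no smoothing."""
            return codebook[indices]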

  3. Electron-electron interaction and spin-orbit coupling in InAs/AlSb heterostructures with a two-dimensional electron gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavrilenko, V. I.; Krishtopenko, S. S., E-mail: ds_a-teens@mail.ru; Goiran, M.

    2011-01-15

    The effect of electron-electron interaction on the spectrum of two-dimensional electron states in InAs/AlSb (001) heterostructures with a GaSb cap layer with one filled size-quantization subband. The energy spectrum of two-dimensional electrons is calculated in the Hartree and Hartree-Fock approximations. It is shown that the exchange interaction decreasing the electron energy in subbands increases the energy gap between subbands and the spin-orbit splitting of the spectrum in the entire region of electron concentrations, at which only the lower size-quantization band is filled. The nonlinear dependence of the Rashba splitting constant at the Fermi wave vector on the concentration of two-dimensionalmore » electrons is demonstrated.« less

  4. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all the vector elements are equally relevant. There are two characteristics which determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility simply as feature vector instability, without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which helps investigate each source of instability independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the prior-art methods in recognition accuracy on both datasets.
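    For orientation, a bare-bones version of the Gabor-filter-plus-quantization step might look like the sketch below (using scikit-image's gabor filter; the frequencies, the sign-based two-bit quantization, and the absence of fragility masking are all simplifying assumptions relative to the proposed method).

        import numpy as np
        from skimage.filters import gabor

        def iris_feature_bits(normalized_iris, frequencies=(0.1, 0.2, 0.3)):
            """Binary iris feature vector: sign-quantized real/imaginary Gabor
            responses of the unwrapped (normalized) iris image."""
            bits = []
            for f in frequencies:
                real, imag = gabor(normalized_iris, frequency=f)
                bits.append(real >= 0)
                bits.append(imag >= 0)
            return np.concatenate([b.ravel() for b in bits]).astype(np.uint8)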

  5. BFV-BRST quantization of two-dimensional supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujiwara, T.; Igarashi, Y.; Kuriki, R.

    1996-01-01

    Two-dimensional supergravity theory is quantized as an anomalous gauge theory. In the Batalin-Fradkin (BF) formalism, the anomaly-canceling super-Liouville fields are introduced to identify the original second-class constrained system with a gauge-fixed version of a first-class system. The BFV-BRST quantization applies to formulate the theory in the most general class of gauges. A local effective action constructed in the configuration space contains two super-Liouville actions; one is a noncovariant but local functional written only in terms of two-dimensional supergravity fields, and the other contains the super-Liouville fields canceling the super-Weyl anomaly. Auxiliary fields for the Liouville and the gravity supermultiplets are introduced to make the BRST algebra close off-shell. Inclusion of them turns out to be essentially important especially in the super-light-cone gauge fixing, where the supercurvature equations ($\partial_-^3 g_{++} = \partial_-^2 \chi_{++} = 0$) are obtained as a result of BRST invariance of the theory. Our approach reveals the origin of the OSp(1,2) current algebra symmetry in a transparent manner. © 1996 The American Physical Society.

  6. Third law of thermodynamics as a key test of generalized entropies.

    PubMed

    Bento, E P; Viswanathan, G M; da Luz, M G E; Silva, R

    2015-02-01

    The laws of thermodynamics constrain the formulation of statistical mechanics at the microscopic level. The third law of thermodynamics states that the entropy must vanish at absolute zero temperature for systems with nondegenerate ground states in equilibrium. Conversely, the entropy can vanish only at absolute zero temperature. Here we ask whether or not generalized entropies satisfy this fundamental property. We propose a direct analytical procedure to test if a generalized entropy satisfies the third law, assuming only very general assumptions for the entropy S and energy U of an arbitrary N-level classical system. Mathematically, the method relies on exact calculation of β=dS/dU in terms of the microstate probabilities p(i). To illustrate this approach, we present exact results for the two best known generalizations of statistical mechanics. Specifically, we study the Kaniadakis entropy S(κ), which is additive, and the Tsallis entropy S(q), which is nonadditive. We show that the Kaniadakis entropy correctly satisfies the third law only for -1<κ<+1, thereby shedding light on why κ is conventionally restricted to this interval. Surprisingly, however, the Tsallis entropy violates the third law for q<1. Finally, we give a concrete example of the power of our proposed method by applying it to a paradigmatic system: the one-dimensional ferromagnetic Ising model with nearest-neighbor interactions.
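    For reference, the two generalized entropies discussed, in their standard forms (both reduce to the Shannon entropy as q → 1 and κ → 0), and the quantity on which the test is built are:

      \[
        S_q \;=\; \frac{1-\sum_i p_i^{\,q}}{q-1},
        \qquad
        S_\kappa \;=\; -\sum_i \frac{p_i^{\,1+\kappa}-p_i^{\,1-\kappa}}{2\kappa},
        \qquad
        \beta \;=\; \frac{dS}{dU}.
      \]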

  7. Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.

    PubMed

    Aparin, Vladimir

    2012-03-01

    This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.

  8. Minimum relative entropy distributions with a large mean are Gaussian

    NASA Astrophysics Data System (ADS)

    Smerlak, Matteo

    2016-12-01

    Entropy optimization principles are versatile tools with wide-ranging applications from statistical physics to engineering to ecology. Here we consider the following constrained problem: Given a prior probability distribution q , find the posterior distribution p minimizing the relative entropy (also known as the Kullback-Leibler divergence) with respect to q under the constraint that mean (p ) is fixed and large. We show that solutions to this problem are approximately Gaussian. We discuss two applications of this result. In the context of dissipative dynamics, the equilibrium distribution of a Brownian particle confined in a strong external field is independent of the shape of the confining potential. We also derive an H -type theorem for evolutionary dynamics: The entropy of the (standardized) distribution of fitness of a population evolving under natural selection is eventually increasing in time.
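    A brief reminder of the structure of such problems (a sketch of the standard result, not the paper's derivation): minimizing the relative entropy D(p‖q) subject to a fixed mean yields an exponentially tilted posterior,

      \[
        p_\lambda(x) \;=\; \frac{q(x)\, e^{\lambda x}}{\int q(y)\, e^{\lambda y}\, dy},
        \qquad \lambda \ \text{chosen so that} \ \mathbb{E}_{p_\lambda}[x] = \mu,
      \]

    and the paper's claim is that, for large μ, this tilted distribution becomes approximately Gaussian.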

  9. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32, and recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of size 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed using a pseudo-entropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of HEVC encoders with 9.8% loss relative to HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
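    A toy version of the estimation idea is sketched below (a plain 4×4 Walsh-Hadamard transform stands in for the paper's newly designed orthogonal matrices, and the non-zero-coefficient count stands in for its texture-rate model; the pseudo-entropy code for the non-texture rate is not reproduced).

        import numpy as np

        # 4x4 Walsh-Hadamard matrix; H4/2 is orthonormal since H4 @ H4.T = 4*I
        H4 = np.array([[1,  1,  1,  1],
                       [1,  1, -1, -1],
                       [1, -1, -1,  1],
                       [1, -1,  1, -1]], dtype=float)

        def cu_rate_distortion_estimate(residual_block, qstep):
            """Estimate texture rate and distortion of a 4x4 prediction residual in
            the transform domain, without inverse transform or de-quantization."""
            coeffs = H4 @ residual_block @ H4.T / 4.0       # orthonormal 2-D WHT
            quantized = np.round(coeffs / qstep)
            rate_proxy = np.count_nonzero(quantized)        # non-zero coefficient count
            distortion = np.sum((coeffs - quantized * qstep) ** 2)
            return rate_proxy, distortion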

  10. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.

  11. Gravitational baryogenesis in running vacuum models

    NASA Astrophysics Data System (ADS)

    Oikonomou, V. K.; Pan, Supriya; Nunes, Rafael C.

    2017-08-01

    We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a nonzero baryon-to-entropy ratio even if the universe is filled with conformal matter. This is a sound difference between the running vacuum gravitational baryogenesis and the Einstein-Hilbert one, since in the latter case, the predicted baryon-to-entropy ratio is zero. We consider two well known and most used running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data. Moreover, we also show that the mechanism of gravitational baryogenesis may constrain the running vacuum models.

  12. Mixing and electronic entropy contributions to thermal energy storage in low melting point alloys

    NASA Astrophysics Data System (ADS)

    Shamberger, Patrick J.; Mizuno, Yasushi; Talapatra, Anjana A.

    2017-07-01

    Melting of crystalline solids is associated with an increase in entropy due to an increase in configurational, rotational, and other degrees of freedom of a system. However, the magnitude of chemical mixing and electronic degrees of freedom, two significant contributions to the entropy of fusion, remain poorly constrained, even in simple 2 and 3 component systems. Here, we present experimentally measured entropies of fusion in the Sn-Pb-Bi and In-Sn-Bi ternary systems, and decouple mixing and electronic contributions. We demonstrate that electronic effects remain the dominant contribution to the entropy of fusion in multi-component post-transition metal and metalloid systems, and that excess entropy of mixing terms can be equal in magnitude to ideal mixing terms, causing regular solution approximations to be inadequate in the general case. Finally, we explore binary eutectic systems using mature thermodynamic databases, identifying eutectics containing at least one semiconducting intermetallic phase as promising candidates to exceed the entropy of fusion of monatomic endmembers, while simultaneously maintaining low melting points. These results have significant implications for engineering high-thermal conductivity metallic phase change materials to store thermal energy.

  13. Characterization of complexity in the electroencephalograph activity of Alzheimer's disease based on fuzzy entropy.

    PubMed

    Cao, Yuzhen; Cai, Lihui; Wang, Jiang; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing

    2015-08-01

    In this paper, experimental neurophysiologic recording and statistical analysis are combined to investigate the nonlinear characteristic and the cognitive function of the brain. Fuzzy approximate entropy and fuzzy sample entropy are applied to characterize the model-based simulated series and electroencephalograph (EEG) series of Alzheimer's disease (AD). The effectiveness and advantages of these two kinds of fuzzy entropy are first verified through the simulated EEG series generated by the alpha rhythm model, including stronger relative consistency and robustness. Furthermore, in order to detect the abnormality of irregularity and chaotic behavior in the AD brain, the complexity features based on these two fuzzy entropies are extracted in the delta, theta, alpha, and beta bands. It is demonstrated that, due to the introduction of fuzzy set theory, the fuzzy entropies could better distinguish EEG signals of AD from those of normal subjects than the approximate entropy and sample entropy. Moreover, the entropy values of AD are significantly decreased in the alpha band, particularly in the temporal brain region, such as electrodes T3 and T4. In addition, fuzzy sample entropy could achieve higher group differences in different brain regions and a higher average classification accuracy of 88.1% with a support vector machine classifier. The obtained results prove that fuzzy sample entropy may be a powerful tool to characterize the complexity abnormalities of AD, which could be helpful in further understanding of the disease.
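    A compact Python sketch of fuzzy sample entropy is given below (the embedding dimension, tolerance, and fuzzy-membership exponent are common defaults, not necessarily the values used in the study).

        import numpy as np

        def fuzzy_sample_entropy(x, m=2, r=0.2, n=2):
            """Fuzzy sample entropy: sample entropy with the hard tolerance replaced
            by an exponential membership exp(-(d/r)^n) of the Chebyshev distance
            between mean-subtracted templates; r is a fraction of the signal SD."""
            x = np.asarray(x, dtype=float)
            tol = r * x.std()

            def phi(m):
                t = np.array([x[i:i + m] for i in range(len(x) - m)])
                t = t - t.mean(axis=1, keepdims=True)            # remove local baseline
                d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
                mask = ~np.eye(len(t), dtype=bool)               # exclude self-matches
                return np.mean(np.exp(-(d[mask] / tol) ** n))

            return -np.log(phi(m + 1) / phi(m))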

  15. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The knowledge of the system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  16. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on wavelet transform (WT)-based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high-compression-ratio region and that the reconstructed fingerprint images yield proper classification.

  17. A hybrid video codec based on extended block sizes, recursive integer transforms, improved interpolation, and flexible motion representation

    NASA Astrophysics Data System (ADS)

    Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.

    2011-01-01

    This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.

  18. Black hole quantum spectrum

    NASA Astrophysics Data System (ADS)

    Corda, Christian

    2013-12-01

    Introducing a black hole (BH) effective temperature, which takes into account both the non-strictly thermal character of Hawking radiation and the countable behavior of emissions of subsequent Hawking quanta, we recently re-analysed BH quasi-normal modes (QNMs) and interpreted them naturally in terms of quantum levels. In this work we improve such an analysis removing some approximations that have been implicitly used in our previous works and obtaining the corrected expressions for the formulas of the horizon's area quantization and the number of quanta of area and hence also for Bekenstein-Hawking entropy, its subleading corrections and the number of micro-states, i.e. quantities which are fundamental to realize the underlying quantum gravity theory, like functions of the QNMs quantum "overtone" number n and, in turn, of the BH quantum excited level. An approximation concerning the maximum value of n is also corrected. On the other hand, our previous results were strictly corrected only for scalar and gravitational perturbations. Here we show that the discussion holds also for vector perturbations. The analysis is totally consistent with the general conviction that BHs result in highly excited states representing both the "hydrogen atom" and the "quasi-thermal emission" in quantum gravity. Our BH model is somewhat similar to the semi-classical Bohr's model of the structure of a hydrogen atom. The thermal approximation of previous results in the literature is consistent with the results in this paper. In principle, such results could also have important implications for the BH information paradox.

  19. The Coulomb problem on a 3-sphere and Heun polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellucci, Stefano; Yeghikyan, Vahagn; Yerevan State University, Alex-Manoogian st. 1, 00025 Yerevan

    2013-08-15

    The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.

  20. Deformation quantization with separation of variables of an endomorphism bundle

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2014-01-01

    Given a holomorphic Hermitian vector bundle E and a star-product with separation of variables on a pseudo-Kähler manifold, we construct a star product on the sections of the endomorphism bundle of the dual bundle E∗ which also has the appropriately generalized property of separation of variables. For this star product we prove a generalization of Gammelgaard's graph-theoretic formula.

  1. Detection of cracks in shafts with the Approximated Entropy algorithm

    NASA Astrophysics Data System (ADS)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and crack orientation in the shaft. The results also show that the algorithm can differentiate the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
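    In the notation of the abstract (embedding dimension p, tolerance fraction f), a minimal Python sketch of the Approximate Entropy computation is shown below; the default values are common choices, not necessarily those used in the paper.

        import numpy as np

        def approximate_entropy(x, p=2, f=0.2):
            """ApEn with tolerance r = f * std(x); p is the number of points per
            sample vector.  ApEn = phi(p) - phi(p + 1)."""
            x = np.asarray(x, dtype=float)
            r = f * x.std()

            def phi(m):
                t = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
                d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
                c = np.mean(d <= r, axis=1)   # fraction within r (self-match included)
                return np.mean(np.log(c))

            return phi(p) - phi(p + 1)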

  2. Black hole entropy in massive Type IIA

    NASA Astrophysics Data System (ADS)

    Benini, Francesco; Khachatryan, Hrachya; Milan, Paolo

    2018-02-01

    We study the entropy of static dyonic BPS black holes in AdS4 in 4d N=2 gauged supergravities with vector and hyper multiplets, and how the entropy can be reproduced with a microscopic counting of states in the AdS/CFT dual field theory. We focus on the particular example of BPS black holes in AdS4 × S6 in massive Type IIA, whose dual three-dimensional boundary description is known and simple. To count the states in field theory we employ a supersymmetric topologically twisted index, which can be computed exactly with localization techniques. We find a perfect match at leading order.

  3. Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Barrett, Adam B.; Seth, Anil K.

    2009-12-01

    Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
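    In symbols, writing the past p values of the processes as X^(p)_{t-1} and Y^(p)_{t-1}, the Gaussian-case relationship is the standard one (a sketch; see the paper for the full multivariate statement):

      \[
        \mathcal{F}_{Y \to X} \;=\; \ln
        \frac{\operatorname{var}\!\bigl(X_t \mid X^{(p)}_{t-1}\bigr)}
             {\operatorname{var}\!\bigl(X_t \mid X^{(p)}_{t-1},\, Y^{(p)}_{t-1}\bigr)},
        \qquad
        \mathcal{T}_{Y \to X} \;=\; \tfrac{1}{2}\,\mathcal{F}_{Y \to X}
        \quad\text{(in nats)}.
      \]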

  4. On the Problem of Bandwidth Partitioning in FDD Block-Fading Single-User MISO/SIMO Systems

    NASA Astrophysics Data System (ADS)

    Ivrlač, Michel T.; Nossek, Josef A.

    2008-12-01

    We report on our research activity on the problem of how to optimally partition the available bandwidth of frequency division duplex, multi-input single-output communication systems, into subbands for the uplink, the downlink, and the feedback. In the downlink, the transmitter applies coherent beamforming based on quantized channel information which is obtained by feedback from the receiver. As feedback takes away resources from the uplink, which could otherwise be used to transfer payload data, it is highly desirable to reserve the "right" amount of uplink resources for the feedback. Under the assumption of random vector quantization, and a frequency flat, independent and identically distributed block-fading channel, we derive closed-form expressions for both the feedback quantization and bandwidth partitioning which jointly maximize the sum of the average payload data rates of the downlink and the uplink. While we do introduce some approximations to facilitate mathematical tractability, the analytical solution is asymptotically exact as the number of antennas approaches infinity, while for systems with few antennas, it turns out to be a fairly accurate approximation. In this way, the obtained results are meaningful for practical communication systems, which usually can only employ a few antennas.

  5. Radiation and matter: Electrodynamics postulates and Lorenz gauge

    NASA Astrophysics Data System (ADS)

    Bobrov, V. B.; Trigger, S. A.; van Heijst, G. J.; Schram, P. P.

    2016-11-01

    In general terms, we consider matter as a system of charged particles and a quantized electromagnetic field. For a consistent description of the thermodynamic properties of matter, especially in an extreme state, the problem of quantization of the longitudinal and scalar potentials should be solved. In this connection, we note that the traditional postulate of electrodynamics, which claims that only electric and magnetic fields are observable, is resolved by denying the validity of the Maxwell equations for microscopic fields. The Maxwell equations, as a generalization of experimental data, are valid only for averaged values. We show that microscopic electrodynamics may be based on postulating the d'Alembert equations for the four-vector of the electromagnetic field potential. The Lorenz gauge is valid for the averaged potentials (and ensures that the Maxwell equations hold for the averages). The suggested concept overcomes difficulties in the electromagnetic field quantization procedure while remaining in accordance with the results of quantum electrodynamics. As a result, longitudinal and scalar photons become real rather than virtual and may in principle be observed. The longitudinal and scalar photons not only provide the Coulomb interaction of charged particles, but also allow the electrical Aharonov-Bohm effect.

  6. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  7. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.

  8. The centripetal force law and the equation of motion for a particle on a curved hypersurface

    NASA Astrophysics Data System (ADS)

    Hu, L. D.; Lian, D. K.; Liu, Q. H.

    2016-12-01

    It is pointed out that the current form of the extrinsic equation of motion for a particle constrained to remain on a hypersurface is in fact a half-finished version; for it is established without regard to the fact that the particle can never depart from the geodesics on the surface. Once this fact is taken into consideration, the equation takes the same form as that for the centripetal force law, provided that the symbols are re-interpreted so that the law is applicable for higher dimensions. The controversial issue of constructing operator forms of these equations is addressed, and our studies show the quantization of constrained system based on the extrinsic equation of motion is preferable.

  9. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises whether the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat, which means that in those cases quantization is unique.

  10. Active Planning, Sensing and Recognition Using a Resource-Constrained Discriminant POMDP

    DTIC Science & Technology

    2014-06-28

    classes of military vehicles, with sample images shown in Fig. 1. The vehicles were captured from various angles. 4785 images with depression angles 17...and 30◦ are used for training, and 4351 images with depression angles 15◦ and 45◦ are used for testing. The azimuth angles are quantized into 12...selection by collecting the engine sounds for the 8 vehicle classes from the Youtube . The sounds are attenuated differently in 6 view directions

  11. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    VanGelder, Allen; Kim, Kwansik

    1996-01-01

    A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.

  12. Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.

    2017-01-01

    Natural rubber is an important export commodity in Indonesia, which can be a major contributor to national economic development. One type of rubber used as an export material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of RSS. RSS rubber quality is specified in SNI 06-001-1987 and the International Standards of Quality and Packing for Natural Rubber Grades (The Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually by visually inspecting the level of air bubbles on the surface of the rubber sheet, so the result is subjective and inconsistent. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for the pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that the proposed method achieves 89% accuracy, with the best performance reached at the fifteenth epoch.
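    As a rough illustration of the LVQ classification step described above (the zoning feature extraction and the RSS image data are not reproduced here), a minimal LVQ1 training loop might look like the following sketch; the feature vectors, class labels, prototype initialization, learning rate, and epoch count are all hypothetical.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=15):
    """Minimal LVQ1: move the winning prototype toward the sample if the
    class matches, away from it otherwise."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))  # winning prototype
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] += sign * lr * (x - P[i])
    return P

def classify(x, prototypes, proto_labels):
    return proto_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

# Hypothetical zoning features for RSS1 / RSS3 sheets (values invented).
X = np.array([[0.10, 0.05], [0.12, 0.07], [0.80, 0.60], [0.75, 0.55]])
y = np.array([0, 0, 1, 1])                   # 0 = RSS1, 1 = RSS3
P0 = np.array([[0.2, 0.1], [0.7, 0.5]])      # one prototype per class
labels = np.array([0, 1])
P = train_lvq1(X, y, P0, labels)
print(classify(np.array([0.11, 0.06]), P, labels))
```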

  13. Local conditions for the generalized covariant entropy bound

    NASA Astrophysics Data System (ADS)

    Gao, Sijie; Lemos, José P.

    2005-04-01

    A set of sufficient conditions for the generalized covariant entropy bound given by Strominger and Thompson is as follows: Suppose that the entropy of matter can be described by an entropy current s^a. Let k^a be any null vector along L and s ≡ -k^a s_a. Then the generalized bound can be derived from the following conditions: (i) s' ≤ 2π T_{ab} k^a k^b, where s' = k^a ∇_a s and T_{ab} is the stress-energy tensor; (ii) on the initial 2-surface B, s(0) ≤ -θ(0)/4, where θ is the expansion of k^a. We prove that condition (ii) alone can be used to divide a spacetime into two regions: the generalized entropy bound holds for all light sheets residing in the region where s < -θ/4 and fails for those in the region where s > -θ/4. We check the validity of these conditions in a flat FRW universe and a scalar field spacetime. Some apparent violations of the entropy bounds in the two spacetimes are discussed. These holographic bounds are important in the formulation of the holographic principle.

  14. PHASE QUANTIZATION STUDY OF SPATIAL LIGHT MODULATOR FOR EXTREME HIGH-CONTRAST IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Jiangpei; Ren, Deqing, E-mail: jpdou@niaot.ac.cn, E-mail: jiangpeidou@gmail.com

    2016-11-20

    Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm, matched to the controllable phase step of the SLM, to solve the quantization problem. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10^-10. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10^-10 in comparison to that by using a deformable mirror.

  15. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and full YOLO architectures, the proposed method achieves accuracy comparable to that of their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
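    The layer-wise, distribution-aware quantization described above cannot be reconstructed from the abstract alone, but the basic fixed-point step it builds on can be sketched as follows; the clipping range taken from an assumed standard-deviation rule and the bit width are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def quantize_uniform(w, n_bits=4, n_sigma=3.0):
    """Symmetric uniform quantizer: clip to +/- n_sigma * std (a stand-in for
    a range chosen from a fitted weight distribution), then round to the
    nearest of 2**n_bits levels and return the de-quantized values."""
    clip = n_sigma * w.std()
    step = 2 * clip / (2 ** n_bits - 1)
    w_q = np.clip(w, -clip, clip)
    return np.round(w_q / step) * step

w = np.random.randn(1000).astype(np.float32)  # stand-in for one layer's weights
w4 = quantize_uniform(w, n_bits=4)
print("4-bit quantization MSE:", float(np.mean((w - w4) ** 2)))
```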

  16. Phase Quantization Study of Spatial Light Modulator for Extreme High-contrast Imaging

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing

    2016-11-01

    Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm, matched to the controllable phase step of the SLM, to solve the quantization problem. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10^-10. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10^-10 in comparison to that by using a deformable mirror.

  17. Entanglement in a model for Hawking radiation: An application of quadratic algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambah, Bindu A., E-mail: bbsp@uohyd.ernet.in; Mukku, C., E-mail: mukku@iiit.ac.in; Shreecharan, T., E-mail: shreecharan@gmail.com

    2013-03-15

    Quadratic polynomially deformed su(1,1) and su(2) algebras are utilized in model Hamiltonians to show how the gravitational system consisting of a black hole, infalling radiation and outgoing (Hawking) radiation can be solved exactly. The models allow us to study the long-time behaviour of the black hole and its outgoing modes. In particular, we calculate the bipartite entanglement entropies of subsystems consisting of (a) infalling plus outgoing modes and (b) black hole modes plus the infalling modes, using the Janus-faced nature of the model. The long-time behaviour also gives us glimpses of modifications in the character of Hawking radiation. Finally, we study the phenomenon of superradiance in our model in analogy with atomic Dicke superradiance. Highlights: We examine a toy model for Hawking radiation with quantized black hole modes. We use quadratic polynomially deformed su(1,1) algebras to study its entanglement properties. We study the 'Dicke Superradiance' in black hole radiation using quadratically deformed su(2) algebras. We study the modification of the thermal character of Hawking radiation due to quantized black hole modes.

  18. A study of non-local holography in the AdS/CFT correspondence

    NASA Astrophysics Data System (ADS)

    Hamilton, Alex

    This thesis is broadly composed of three topics. After giving a brief overview of the origins of the AdS/CFT duality, we describe a way of representing local bulk fields as quasi-local CFT operators. We show how these smeared boundary operators encode the holographic radial-scale duality, and how this can lead to degrees of freedom consistent with Bekenstein's entropy. We also gain insight into the BTZ black hole, with the horizon, singularity, and thermality arising naturally via these operators. As another aspect of AdS/CFT, we will be interested in the fate of giant gravitons under a marginal deformation. We review the construction and fluctuation spectrum of giants, and then proceed to evaluate them in two different Penrose limits of Lunin and Maldacena's gamma deformed geometry. We find only one to be stable, and describe how the degeneracy of the spectrum is partially broken. Finally, we make a first step towards cosmological particle production in string theory by introducing a first quantized alternative approach to the standard method of calculation. We show how the same calculation can be done with Green's Functions---objects which are well defined in a first quantized setting (such as string theory).

  19. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase of the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
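    The linear model mentioned above (chromatogram = peak-shape matrix times concentration vector) can be written down directly; the Gaussian peak shape, the dimensions, and the two-component test signal below are illustrative assumptions, and the maximum entropy inversion itself is not reproduced.

```python
import numpy as np

n = 200                                     # number of time samples (assumed)
t = np.arange(n)

def peak_shape_matrix(width=4.0):
    """Each column is a Gaussian peak centred at one sample time, so the
    chromatogram is y = A @ c for a concentration vector c."""
    return np.exp(-0.5 * ((t[:, None] - t[None, :]) / width) ** 2)

A = peak_shape_matrix()
c = np.zeros(n)
c[[80, 90]] = [1.0, 0.6]                    # two closely spaced components
y = A @ c + 0.01 * np.random.randn(n)       # overlapped, noisy chromatogram
print(y.shape)
```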

  20. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  1. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
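    As a hedged illustration of the general idea (a toy sketch, not the patented procedure itself), auxiliary bits can be hidden in quantization indices by nudging each selected index to an adjacent value whose parity matches the bit to embed, exploiting the one-unit uncertainty mentioned above; the index values and message are invented.

```python
import numpy as np

def embed_bits(indices, bits):
    """Toy illustration: force the parity of each index to carry one bit,
    moving the index by at most one unit."""
    out = indices.copy()
    for k, b in enumerate(bits):
        if out[k] % 2 != b:
            out[k] += 1 if out[k] <= 0 else -1   # stay within one unit
    return out

def extract_bits(indices, n):
    return [int(i % 2) for i in indices[:n]]

idx = np.array([5, -3, 0, 7, 2])            # hypothetical quantizer indices
msg = [1, 0, 1, 1, 0]
print(extract_bits(embed_bits(idx, msg), len(msg)))   # -> [1, 0, 1, 1, 0]
```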

  2. Communication: phase transitions, criticality, and three-phase coexistence in constrained cell models.

    PubMed

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G

    2012-05-28

    In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.

  3. High pressure synthesis of a hexagonal close-packed phase of the high-entropy alloy CrMnFeCoNi

    NASA Astrophysics Data System (ADS)

    Tracy, Cameron L.; Park, Sulgiye; Rittman, Dylan R.; Zinkle, Steven J.; Bei, Hongbin; Lang, Maik; Ewing, Rodney C.; Mao, Wendy L.

    2017-05-01

    High-entropy alloys, near-equiatomic solid solutions of five or more elements, represent a new strategy for the design of materials with properties superior to those of conventional alloys. However, their phase space remains constrained, with transition metal high-entropy alloys exhibiting only face- or body-centered cubic structures. Here, we report the high-pressure synthesis of a hexagonal close-packed phase of the prototypical high-entropy alloy CrMnFeCoNi. This martensitic transformation begins at 14 GPa and is attributed to suppression of the local magnetic moments, destabilizing the initial fcc structure. Similar to fcc-to-hcp transformations in Al and the noble gases, the transformation is sluggish, occurring over a range of >40 GPa. However, the behaviour of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures. This demonstrates a means of tuning the structures and properties of high-entropy alloys in a manner not achievable by conventional processing techniques.

  4. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.

  5. Wigner functions on non-standard symplectic vector spaces

    NASA Astrophysics Data System (ADS)

    Dias, Nuno Costa; Prata, João Nuno

    2018-01-01

    We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

  6. The BRST complex of homological Poisson reduction

    NASA Astrophysics Data System (ADS)

    Müller-Lennert, Martin

    2017-02-01

    BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

  7. Observation of Landau levels in potassium-intercalated graphite under a zero magnetic field

    PubMed Central

    Guo, Donghui; Kondo, Takahiro; Machida, Takahiro; Iwatake, Keigo; Okada, Susumu; Nakamura, Junji

    2012-01-01

    The charge carriers in graphene are massless Dirac fermions and exhibit a relativistic Landau-level quantization in a magnetic field. Recently, it has been reported that, without any external magnetic field, quantized energy levels have been also observed from strained graphene nanobubbles on a platinum surface, which were attributed to the Landau levels of massless Dirac fermions in graphene formed by a strain-induced pseudomagnetic field. Here we show the generation of the Landau levels of massless Dirac fermions on a partially potassium-intercalated graphite surface without applying external magnetic field. Landau levels of massless Dirac fermions indicate the graphene character in partially potassium-intercalated graphite. The generation of the Landau levels is ascribed to a vector potential induced by the perturbation of nearest-neighbour hopping, which may originate from a strain or a gradient of on-site potentials at the perimeters of potassium-free domains. PMID:22990864

  8. Diffractive charmonium spectrum in high energy collisions in the basis light-front quantization approach

    DOE PAGES

    Chen, Guangyao; Li, Yang; Maris, Pieter; ...

    2017-04-14

    Using the charmonium light-front wavefunctions obtained by diagonalizing an effective Hamiltonian with the one-gluon exchange interaction and a confining potential inspired by light-front holography in the basis light-front quantization formalism, we compute production of charmonium states in diffractive deep inelastic scattering and ultra-peripheral heavy ion collisions within the dipole picture. Our method allows us to predict yields of all vector charmonium states below the open flavor thresholds in high-energy deep inelastic scattering, proton-nucleus and ultra-peripheral heavy ion collisions, without introducing any new parameters in the light-front wavefunctions. The obtained charmonium cross section is in reasonable agreement with experimental data at HERA, RHIC and LHC. We observe that the cross-section ratio σ_{ψ(2S)}/σ_{J/ψ} reveals significant independence of model parameters.

  9. A novel parallel pipeline structure of VP9 decoder

    NASA Astrophysics Data System (ADS)

    Qin, Huabiao; Chen, Wu; Yi, Sijun; Tan, Yunfei; Yi, Huan

    2018-04-01

    To improve the efficiency of the VP9 decoder, a novel parallel pipeline structure is presented in this paper. According to the decoding workflow, the VP9 decoder can be divided into sub-modules, which include entropy decoding, inverse quantization, inverse transform, intra prediction, inter prediction, deblocking and pixel adaptive compensation. By analyzing the computing time of each module, the hotspot modules are located and the causes of the decoder's low efficiency can be found. A novel pipeline decoder structure is then designed using mixed parallel decoding methods of data division and function division. The experimental results show that this structure can greatly improve the decoding efficiency of VP9.

  10. Rolling bearing fault detection and diagnosis based on composite multiscale fuzzy entropy and ensemble support vector machines

    NASA Astrophysics Data System (ADS)

    Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng

    2017-02-01

    To detect the incipient failure of rolling bearings in a timely manner and locate the fault accurately, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), as an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values are affected by the data length, especially when the data are not long enough. By combining information from multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, as the scale factor increases, CMFE yields much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical reasons why CMFE is suitable for rolling bearing fault diagnosis are also explored. On this basis, to achieve automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed rolling bearing fault diagnosis method is applied to experimental data, and the results indicate that it can effectively distinguish different fault categories and severities of rolling bearings.
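    A minimal sketch of the composite coarse-graining step that distinguishes CMFE from MFE is shown below; the entropy of each coarse-grained series is left as a placeholder function (not the paper's FuzzyEn), and the signal, scales, and parameters are assumptions.

```python
import numpy as np

def coarse_grain(x, scale, offset=0):
    """Average non-overlapping windows of length `scale`, starting at `offset`."""
    n = (len(x) - offset) // scale
    return x[offset:offset + n * scale].reshape(n, scale).mean(axis=1)

def composite_entropy(x, scale, entropy_fn):
    """CMFE idea: average the entropy over all `scale` possible offsets of the
    coarse-grained series at a given scale, instead of using offset 0 only."""
    return np.mean([entropy_fn(coarse_grain(x, scale, k)) for k in range(scale)])

# Placeholder complexity measure; FuzzyEn would be used in the actual method.
entropy_fn = lambda s: np.log(np.std(s) + 1e-12)
x = np.random.randn(2048)                   # stand-in for a vibration signal
print([composite_entropy(x, s, entropy_fn) for s in range(1, 5)])
```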

  11. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  12. An Intelligent System for Monitoring the Microgravity Environment Quality On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Lin, Paul P.; Jules, Kenol

    2002-01-01

    An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.

  13. Measurement of entropy generation within bypass transitional flow

    NASA Astrophysics Data System (ADS)

    Skifton, Richard; Budwig, Ralph; McEligot, Donald; Crepeau, John

    2012-11-01

    A flat plate made from quartz was submersed in the Idaho National Laboratory's Matched Index of Refraction (MIR) flow facility. PIV was utilized to capture spatial vector maps at near-wall locations with five to ten points within the viscous sublayer. Entropy generation was calculated directly from measured velocity fluctuation derivatives. Two flows were studied: a zero pressure gradient and an adverse pressure gradient (β = -0.039). The free stream turbulence intensity used to drive bypass transition ranged between 3% (near trailing edge) and 8% (near leading edge). The pointwise entropy generation rate will be utilized as a design parameter to systematically reduce losses. As a second observation, the pointwise entropy can be shown to predict the onset of transitional flow. This research was partially supported by the DOE EPSCOR program, grant DE-SC0004751 and by the Idaho National Laboratory, Center for Advanced Energy Studies.

  14. On the on-shell: the action of AdS4 black holes

    NASA Astrophysics Data System (ADS)

    Halmagyi, Nick; Lal, Shailesh

    2018-03-01

    We compute the on-shell action of static, BPS black holes in AdS_4 from N=2 gauged supergravity coupled to vector multiplets and show that for a certain class it is equal to minus the entropy of the black hole. Holographic renormalization is used to demonstrate that with Neumann boundary conditions on the scalar fields, the divergent and finite contributions from the asymptotic boundary vanish. The entropy arises from the extrinsic curvature on Σ_g × S^1 evaluated at the horizon, where Σ_g may have any genus g ≥ 0. This provides a clarification of the equivalence between the partition function of the twisted ABJM theory on Σ_g × S^1 and the entropy of the dual black hole solutions. It also demonstrates that the complete entropy resides on the AdS_2 × Σ_g horizon geometry, implying the absence of hair for these gravity solutions.

  15. Course 4: Anyons

    NASA Astrophysics Data System (ADS)

    Myrheim, J.

    Contents 1 Introduction 1.1 The concept of particle statistics 1.2 Statistical mechanics and the many-body problem 1.3 Experimental physics in two dimensions 1.4 The algebraic approach: Heisenberg quantization 1.5 More general quantizations 2 The configuration space 2.1 The Euclidean relative space for two particles 2.2 Dimensions d=1,2,3 2.3 Homotopy 2.4 The braid group 3 Schroedinger quantization in one dimension 4 Heisenberg quantization in one dimension 4.1 The coordinate representation 5 Schroedinger quantization in dimension d ≥ 2 5.1 Scalar wave functions 5.2 Homotopy 5.3 Interchange phases 5.4 The statistics vector potential 5.5 The N-particle case 5.6 Chern-Simons theory 6 The Feynman path integral for anyons 6.1 Eigenstates for position and momentum 6.2 The path integral 6.3 Conjugation classes in SN 6.4 The non-interacting case 6.5 Duality of Feynman and Schroedinger quantization 7 The harmonic oscillator 7.1 The two-dimensional harmonic oscillator 7.2 Two anyons in a harmonic oscillator potential 7.3 More than two anyons 7.4 The three-anyon problem 8 The anyon gas 8.1 The cluster and virial expansions 8.2 First and second order perturbative results 8.3 Regularization by periodic boundary conditions 8.4 Regularization by a harmonic oscillator potential 8.5 Bosons and fermions 8.6 Two anyons 8.7 Three anyons 8.8 The Monte Carlo method 8.9 The path integral representation of the coefficients GP 8.10 Exact and approximate polynomials 8.11 The fourth virial coefficient of anyons 8.12 Two polynomial theorems 9 Charged particles in a constant magnetic field 9.1 One particle in a magnetic field 9.2 Two anyons in a magnetic field 9.3 The anyon gas in a magnetic field 10 Interchange phases and geometric phases 10.1 Introduction to geometric phases 10.2 One particle in a magnetic field 10.3 Two particles in a magnetic field 10.4 Interchange of two anyons in potential wells 10.5 Laughlin's theory of the fractional quantum Hall effect

  16. Information and Entropy

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
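    As a worked equation (one common sign convention, supplied here since the abstract does not fix notation), the logarithmic relative entropy singled out by the eliminative induction argument is

    \[
    S[p \mid q] \;=\; -\int dx\; p(x)\,\log\frac{p(x)}{q(x)},
    \]
    maximized over p subject to normalization and to the expectation-value constraints that encode the new information. With no constraints beyond normalization the prior q is recovered, and, as noted above, suitable choices of prior and constraints reproduce MaxEnt and Bayes' rule as special cases.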

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tracy, Cameron L.; Park, Sulgiye; Rittman, Dylan R.

    High-entropy alloys, near-equiatomic solid solutions of five or more elements, represent a new strategy for the design of materials with properties superior to those of conventional alloys. However, their phase space remains constrained, with transition metal high-entropy alloys exhibiting only face- or body-centered cubic structures. Here, we report the high-pressure synthesis of a hexagonal close-packed phase of the prototypical high-entropy alloy CrMnFeCoNi. This martensitic transformation begins at 14 GPa and is attributed to suppression of the local magnetic moments, destabilizing the initial fcc structure. Similar to fcc-to-hcp transformations in Al and the noble gases, the transformation is sluggish, occurring over a range of >40 GPa. However, the behaviour of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures. This demonstrates a means of tuning the structures and properties of high-entropy alloys in a manner not achievable by conventional processing techniques.

  18. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
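    For reference, a minimal statement of the entropy function / entropy flux pair alluded to above (the textbook definition for u_t + f(u)_x = 0, not the paper's explicit expressions):

    \[
    q'(u) = \eta'(u)\, f'(u), \qquad \partial_t\,\eta(u) + \partial_x\, q(u) \le 0,
    \]
    with η convex; equality holds for smooth solutions, and the inequality is the admissibility condition satisfied by the vanishing-viscosity weak solutions discussed above.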

  19. On families of differential equations on two-torus with all phase-lock areas

    NASA Astrophysics Data System (ADS)

    Glutsyuk, Alexey; Rybnikov, Leonid

    2017-01-01

    We consider two-parametric families of non-autonomous ordinary differential equations on the two-torus with coordinates (x, t) of the type ẋ = v(x) + A + B f(t). We study its rotation number as a function of the parameters (A, B). The phase-lock areas are those level sets of the rotation number function ρ = ρ(A, B) that have non-empty interiors. Buchstaber, Karpov and Tertychnyi studied the case when v(x) = sin x in their joint paper. They observed the quantization effect: for every smooth periodic function f(t) the family of equations may have phase-lock areas only for integer rotation numbers. Another proof of this quantization statement was later obtained in a joint paper by Ilyashenko, Filimonov and Ryzhov. This implies a similar quantization effect for every v(x) = a sin(mx) + b cos(mx) + c and rotation numbers that are multiples of 1/m. We show that for every other analytic vector field v(x) (i.e. having at least two Fourier harmonics with non-zero non-opposite degrees and nonzero coefficients) there exists an analytic periodic function f(t) such that the corresponding family of equations has phase-lock areas for all rational values of the rotation number.

  20. Perturbative Quantum Gravity and its Relation to Gauge Theory.

    PubMed

    Bern, Zvi

    2002-01-01

    In this review we describe a non-trivial relationship between perturbative gauge theory and gravity scattering amplitudes. At the semi-classical or tree level, the scattering amplitudes of gravity theories in flat space can be expressed as a sum of products of well-defined pieces of gauge theory amplitudes. These relationships were first discovered by Kawai, Lewellen, and Tye in the context of string theory, but hold more generally. In particular, they hold for standard Einstein gravity. A method based on D-dimensional unitarity can then be used to systematically construct all quantum loop corrections order-by-order in perturbation theory using as input the gravity tree amplitudes expressed in terms of gauge theory ones. More generally, the unitarity method provides a means for perturbatively quantizing massless gravity theories without the usual formal apparatus associated with the quantization of constrained systems. As one application, this method was used to demonstrate that maximally supersymmetric gravity is less divergent in the ultraviolet than previously thought.

  1. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning

    PubMed Central

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M.; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global image features from brain images. A directed acyclic graph is employed to give the support vector machine the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for this three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision based methods are now being used to detect it automatically. Materials: We have in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject, and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: The 10 repetitions of 10-fold cross validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study can attract more computer vision methods for detecting hearing loss. PMID:27807415

  2. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning.

    PubMed

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global image features from brain images. A directed acyclic graph is employed to give the support vector machine the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for this three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision based methods are now being used to detect it automatically. Materials: We have in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject, and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: The 10 repetitions of 10-fold cross validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study can attract more computer vision methods for detecting hearing loss.

  3. Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.

    2008-12-05

    We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.

  4. Quantization of Motor Activity into Primitives and Time-Frequency Atoms Using Independent Component Analysis and Matching Pursuit Algorithms

    DTIC Science & Technology

    2001-10-25

    form: (1) A is a scaling factor, t is time and r a coordinate vector describing the limb configuration. We...combination of limb state and EMG. In our early examination of EMG we detected underlying groups of muscles and phases of activity by inspection and...representations of EEG or other biological signals has been thoroughly explored. Such components might be used as a basis for neuroprosthetic control

  5. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition of the distributed optical fiber sensing signal have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction methods to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder and rain signals, etc.), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. We choose the MFCC characteristic vector to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six levels by the Daubechies wavelet packet transform, from which 64 frequency components are extracted as the characteristic vector. In the pattern recognition process, a diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. The recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
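    The wavelet packet Shannon entropy feature can be sketched as follows, assuming the 64 level-6 subband coefficient arrays have already been obtained (for example, with a wavelet packet transform library); normalizing subband energies to a distribution before taking the entropy is a common convention rather than a detail taken from the paper.

```python
import numpy as np

def shannon_entropy_features(subbands):
    """Given a list of subband coefficient arrays (64 at decomposition level 6),
    return the per-subband energies and the Shannon entropy of their
    normalized distribution."""
    energies = np.array([np.sum(np.asarray(c) ** 2) for c in subbands])
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return energies, entropy

# Stand-in subbands; in practice these come from a 6-level wavelet packet transform.
subbands = [np.random.randn(32) for _ in range(64)]
energies, H = shannon_entropy_features(subbands)
print(energies.shape, H)
```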

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klima, Matej; Kucharik, MIlan; Shashkov, Mikhail Jurievich

    We analyze several new and existing approaches for limiting tensor quantities in the context of deviatoric stress remapping in an ALE numerical simulation of elastic flow. Remapping and limiting of the tensor component-by-component is shown to violate radial symmetry of derived variables such as elastic energy or force. Therefore, we have extended the symmetry-preserving Vector Image Polygon algorithm, originally designed for limiting vector variables. This limiter constrains the vector (in our case a vector of independent tensor components) within the convex hull formed by the vectors from surrounding cells – an equivalent of the discrete maximum principle in scalar variables. We compare this method with a limiter designed specifically for deviatoric stress limiting, which aims to constrain the J_2 invariant, proportional to the specific elastic energy, and scale the tensor accordingly. We also propose a method which involves remapping and limiting the J_2 invariant independently using known scalar techniques. The deviatoric stress tensor is then scaled to match this remapped invariant, which guarantees conservation in terms of elastic energy.

  7. Hairy strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahakian, Vatche

    Zero modes of the world-sheet spinors of a closed string can source higher order moments of the bulk supergravity fields. In this work, we analyze various configurations of closed strings focusing on the imprints of the quantized spinor vacuum expectation values onto the tails of bulk fields. We identify supersymmetric arrangements for which all multipole charges vanish; while for others, we find that one is left with Neveu-Schwarz-Neveu-Schwarz, and Ramond-Ramond dipole and quadrupole moments. Our analysis is exhaustive with respect to all the bosonic fields of the bulk and to all higher order moments. We comment on the relevance of these results to entropy computations of hairy black holes of a single charge or more, and to open/closed string duality.

  8. Single-Atom Demonstration of the Quantum Landauer Principle

    NASA Astrophysics Data System (ADS)

    Yan, L. L.; Xiong, T. P.; Rehan, K.; Zhou, F.; Liang, D. F.; Chen, L.; Zhang, J. Q.; Yang, W. L.; Ma, Z. H.; Feng, M.

    2018-05-01

    One of the outstanding challenges to information processing is the eloquent suppression of energy consumption in the execution of logic operations. The Landauer principle sets an energy constraint in deletion of a classical bit of information. Although some attempts have been made to experimentally approach the fundamental limit restricted by this principle, exploring the Landauer principle in a purely quantum mechanical fashion is still an open question. Employing a trapped ultracold ion, we experimentally demonstrate a quantum version of the Landauer principle, i.e., an equality associated with the energy cost of information erasure in conjunction with the entropy change of the associated quantized environment. Our experimental investigation substantiates an intimate link between information thermodynamics and quantum candidate systems for information processing.

  9. How quantization of gravity leads to a discrete space-time

    NASA Astrophysics Data System (ADS)

    't Hooft, Gerard

    2016-03-01

    The idea that the Planck length is the smallest unit of length, and the Planck time the smallest unit of time, is natural, and has been suggested many times. One can, however, also derive this more rigorously, using nothing more than the fact that black holes emit particles, according to Hawking's theory, and that these particles interact gravitationally. It is then observed that the particles, going in and out, form quantum states bouncing against the horizon. The dynamics of these microstates can be described in a partial wave expansion, but Hawking's expression for the entropy then requires a cut-off in the transverse momentum, in the form of a Brillouin zone, and this implies that these particles live on a lattice.

  10. Quantum mechanics of a constrained particle on an ellipsoid: Bein formalism and Geometric momentum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panahi, H., E-mail: t-panahi@guilan.ac.ir; Jahangiri, L., E-mail: laleh.jahangiry@yahoo.com

    2016-09-15

    In this work we apply the Dirac method in order to obtain the classical relations for a particle on an ellipsoid. We also determine the quantum mechanical form of these relations by using Dirac quantization. Then, by considering the canonical commutation relations between the position and momentum operators in terms of curved coordinates, we propose suitable representations for the momentum operator that satisfy the obtained commutators between position and momentum in Euclidean space. We find that our representations for the momentum operator are the same as the geometric one.

  11. The lattice of trumping majorization for 4D probability vectors and 2D catalysts.

    PubMed

    Bosyk, Gustavo M; Freytes, Hector; Bellomo, Guido; Sergioli, Giuseppe

    2018-02-27

    The transformation of an initial bipartite pure state into a target one by means of local operations and classical communication, assisted by an entangled catalyst, defines a partial order between probability vectors. This partial order, so-called trumping majorization, is based on tensor products and the majorization relation. Here, we aim to study order properties of trumping majorization. We show that the trumping majorization partial order is indeed a lattice for four-dimensional probability vectors and two-dimensional catalysts. In addition, we show that the subadditivity and supermodularity of the Shannon entropy on the majorization lattice are inherited by the trumping majorization lattice. Finally, we provide a suitable definition of distance for four-dimensional probability vectors.
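    In standard notation (a convention supplied here, since the abstract does not spell it out), the two orders involved are

    \[
    x \prec y \;\iff\; \sum_{i=1}^{k} x^{\downarrow}_i \le \sum_{i=1}^{k} y^{\downarrow}_i \;\;(k = 1,\dots,d-1)
    \;\text{ and }\; \sum_{i=1}^{d} x_i = \sum_{i=1}^{d} y_i,
    \qquad
    x \prec_T y \;\iff\; \exists\, c :\; x \otimes c \prec y \otimes c,
    \]
    where x^{\downarrow} denotes x sorted in non-increasing order; here x and y are the four-dimensional probability vectors and c is the two-dimensional catalyst of the title.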

  12. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2002-08-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  13. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2005-11-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  14. Vector/Matrix Quantization for Narrow-Bandwidth Digital Speech Compression.

    DTIC Science & Technology

    1982-09-01

    [Garbled OCR snippet from the scanned report; only the trailing reference fragments are recoverable:] ... Prediction of the Speech Wave, JASA Vol. 50, pp. 637-655, April 1971. 2. F. Itakura and S. Saito, Analysis Synthesis Telephony Based Upon the Maximum

  15. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
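    The vector quantization step such a compression system relies on reduces, per image block, to a nearest-codeword search; a minimal sketch is shown below, with a hypothetical random codebook standing in for the one learned by the self-organizing network, and the block size and codebook size as assumptions.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each image block (row vector) to the index of its nearest codeword."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

def vq_decode(indices, codebook):
    return codebook[indices]

rng = np.random.default_rng(0)
blocks = rng.random((100, 16))         # 100 blocks of 4x4 pixels, flattened
codebook = rng.random((256, 16))       # hypothetical 256-codeword codebook
idx = vq_encode(blocks, codebook)      # 8 bits per block instead of 16 pixels
recon = vq_decode(idx, codebook)
print(idx[:5], float(((blocks - recon) ** 2).mean()))
```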

  16. Modulated error diffusion CGHs for neural nets

    NASA Astrophysics Data System (ADS)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).

  17. Quantum angular momentum diffusion of rigid bodies

    NASA Astrophysics Data System (ADS)

    Papendell, Birthe; Stickler, Benjamin A.; Hornberger, Klaus

    2017-12-01

    We show how to describe the diffusion of the quantized angular momentum vector of an arbitrarily shaped rigid rotor as induced by its collisional interaction with an environment. We present the general form of the Lindblad-type master equation and relate it to the orientational decoherence of an asymmetric nanoparticle in the limit of small anisotropies. The corresponding diffusion coefficients are derived for gas particles scattering off large molecules and for ambient photons scattering off dielectric particles, using the elastic scattering amplitudes.

  18. Moving Forward to Constrain the Shear Viscosity of QCD Matter

    DOE PAGES

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn

    2016-05-26

    In this work, we demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from the RHIC, we find evidence for a small η/s ≈ 0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  19. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

    The maximum entropy production (MEP) principle holds that non-equilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the potential energy destruction rate. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information acquired via evolution and curated by natural selection in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and can orchestrate metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage, but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  20. Rényi entropy, stationarity, and entanglement of the conformal scalar

    NASA Astrophysics Data System (ADS)

    Lee, Jeongseog; Lewkowycz, Aitor; Perlmutter, Eric; Safdi, Benjamin R.

    2015-03-01

    We extend previous work on the perturbative expansion of the Rényi entropy, S q , around q = 1 for a spherical entangling surface in a general CFT. Applied to conformal scalar fields in various spacetime dimensions, the results appear to conflict with the known conformal scalar Rényi entropies. On the other hand, the perturbative results agree with known Rényi entropies in a variety of other theories, including theories of free fermions and vector fields and theories with Einstein gravity duals. We propose a resolution stemming from a careful consideration of boundary conditions near the entangling surface. This is equivalent to a proper treatment of total-derivative terms in the definition of the modular Hamiltonian. As a corollary, we are able to resolve an outstanding puzzle in the literature regarding the Rényi entropy of super-Yang-Mills near q = 1. A related puzzle regards the question of stationarity of the renormalized entanglement entropy (REE) across a circle for a (2+1)-dimensional massive scalar field. We point out that the boundary contributions to the modular Hamiltonian shed light on the previously-observed non-stationarity. Moreover, IR divergences appear in perturbation theory about the massless fixed point that inhibit our ability to reliably calculate the REE at small non-zero mass.

  1. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  2. Exact Theory of Compressible Fluid Turbulence

    NASA Astrophysics Data System (ADS)

    Drivas, Theodore; Eyink, Gregory

    2017-11-01

    We obtain exact results for compressible turbulence with any equation of state, using coarse-graining/filtering. We find two mechanisms of turbulent kinetic energy dissipation: scale-local energy cascade and ``pressure-work defect'', or pressure-work at viscous scales exceeding that in the inertial-range. Planar shocks in an ideal gas dissipate all kinetic energy by pressure-work defect, but the effect is omitted by standard LES modeling of pressure-dilatation. We also obtain a novel inverse cascade of thermodynamic entropy, injected by microscopic entropy production, cascaded upscale, and removed by large-scale cooling. This nonlinear process is missed by the Kovasznay linear mode decomposition, treating entropy as a passive scalar. For small Mach number we recover the incompressible ``negentropy cascade'' predicted by Obukhov. We derive exact Kolmogorov 4/5th-type laws for energy and entropy cascades, constraining scaling exponents of velocity, density, and internal energy to sub-Kolmogorov values. Although precise exponents and detailed physics are Mach-dependent, our exact results hold at all Mach numbers. Flow realizations at infinite Reynolds are ``dissipative weak solutions'' of compressible Euler equations, similarly as Onsager proposed for incompressible turbulence.

  3. Fast and efficient search for MPEG-4 video using adjacent pixel intensity difference quantization histogram feature

    NASA Astrophysics Data System (ADS)

    Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro

    2010-02-01

    In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram is utilized as the feature vector of the VOP (video object plane), a feature that had previously been applied reliably to human face recognition. Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as drama, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an equal error rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
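
    The record does not spell out the quantizer design, so as a rough illustration of the idea behind an APIDQ histogram feature, the sketch below quantizes the intensity differences between horizontally and vertically adjacent pixels and histograms them into a fixed-length frame descriptor; the bin count and clipping range are arbitrary assumptions, not the feature definition used in the paper.

        import numpy as np

        def apidq_histogram(frame, num_bins=32, max_abs_diff=64):
            """Minimal sketch of an adjacent-pixel intensity-difference
            quantization (APIDQ) histogram for one frame/VOP. The bin count
            and clipping range are illustrative assumptions."""
            f = frame.astype(np.float32)
            # Differences between horizontally and vertically adjacent pixels.
            dh = f[:, 1:] - f[:, :-1]
            dv = f[1:, :] - f[:-1, :]
            diffs = np.concatenate([dh.ravel(), dv.ravel()])
            # Uniform quantization of the clipped differences into num_bins cells.
            diffs = np.clip(diffs, -max_abs_diff, max_abs_diff)
            hist, _ = np.histogram(diffs, bins=num_bins,
                                   range=(-max_abs_diff, max_abs_diff))
            # Normalize so frames of different sizes are comparable.
            return hist / hist.sum()

        # Example: feature distance between two frames (e.g. DC images of two VOPs).
        # frame_a, frame_b = ..., ...   # 2-D uint8 arrays
        # d = np.abs(apidq_histogram(frame_a) - apidq_histogram(frame_b)).sum()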

  4. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  5. Passive forensics for copy-move image forgery using a method based on DCT and SVD.

    PubMed

    Zhao, Jie; Guo, Jichang

    2013-12-10

    As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has greatly increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need to improve their robustness to common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and the SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift-frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when the image has been distorted by Gaussian blurring, AWGN, JPEG compression, and their mixed operations.
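
    As a hedged illustration of the feature-extraction stage described above (overlapping blocks, block DCT, coarse quantization, per-sub-block largest singular value, lexicographic sorting), the following sketch uses assumed block and sub-block sizes and a scalar quantization step in place of the paper's quantization matrix; the block-matching and thresholding stages are omitted.

        import numpy as np
        from scipy.fft import dctn

        def block_features(img, block=8, sub=4, q_step=16.0):
            """Sketch of the feature-extraction stage: overlapping blocks ->
            2D-DCT -> coarse quantization -> per-sub-block largest singular
            value. Block size, sub-block size, and q_step are illustrative."""
            h, w = img.shape
            feats, positions = [], []
            for y in range(h - block + 1):
                for x in range(w - block + 1):
                    blk = img[y:y + block, x:x + block].astype(np.float64)
                    # Quantized DCT block (scalar step stands in for a quantization matrix).
                    coeffs = np.round(dctn(blk, norm='ortho') / q_step)
                    vec = []
                    for sy in range(0, block, sub):          # non-overlapping sub-blocks
                        for sx in range(0, block, sub):
                            s = np.linalg.svd(coeffs[sy:sy + sub, sx:sx + sub],
                                              compute_uv=False)
                            vec.append(s[0])                  # largest singular value
                    feats.append(vec)
                    positions.append((y, x))
            feats = np.array(feats)
            order = np.lexsort(feats.T[::-1])                 # lexicographic sort of features
            return feats[order], np.array(positions)[order]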

  6. Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications

    NASA Astrophysics Data System (ADS)

    Choi, Jinseok; Evans, Brian L.; Gatherer, Alan

    2017-12-01

    In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
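
    The paper's closed-form BA solution is not reproduced in this record; the sketch below only mirrors the stated proportionality (bits growing like the logarithm of SNR raised to the 1/3 power) under an assumed per-ADC power model P_i ∝ 2^{b_i}, with rounding and clipping added. The power model, constants, and bit range are all assumptions for illustration.

        import numpy as np

        def allocate_adc_bits(snr_per_chain, total_power,
                              power_per_bit_level=1.0, b_min=1, b_max=12):
            """Illustrative bit allocation: real-valued bits follow
            (1/3)*log2(SNR) plus a common offset chosen so that the assumed
            ADC power model P_i = power_per_bit_level * 2**b_i meets the
            total power budget; results are then rounded and clipped."""
            snr = np.asarray(snr_per_chain, dtype=float)
            rel = np.log2(snr) / 3.0          # relative profile ~ log2(SNR**(1/3))
            # Common offset c so that sum(2**(rel + c)) * k == total_power.
            c = np.log2(total_power / (power_per_bit_level * np.sum(2.0 ** rel)))
            return np.clip(np.round(rel + c), b_min, b_max).astype(int)

        # Example: 8 RF chains, budget equivalent to eight 4-bit ADCs.
        # bits = allocate_adc_bits(np.random.uniform(1, 100, 8), total_power=8 * 2**4)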

  7. Spin vectors in the Koronis family: III. (832) Karin

    NASA Astrophysics Data System (ADS)

    Slivan, Stephen M.; Molnar, Lawrence A.

    2012-08-01

    Studies of asteroid families constrain models of asteroid collisions and evolution processes, and the Karin cluster within the Koronis family is among the youngest families known (Nesvorný, D., Bottke, Jr., W.F., Dones, L., Levison, H.F. [2002]. Nature 417, 720-722). (832) Karin itself is by far the largest member of the Karin cluster, thus knowledge of Karin's spin vector is important to constrain family formation and evolution models that include spin, and to test whether its spin properties are consistent with the Karin cluster being a very young family. We observed rotation lightcurves of Karin during its four consecutive apparitions in 2006-2009, and combined the new observations with previously published lightcurves to determine its spin vector orientation and preliminary model shape. Karin is a prograde rotator with a period of (18.352 ± 0.003) h, spin obliquity near (42 ± 5)°, and pole ecliptic longitude near either (52 ± 5)° or (230 ± 5)°. The spin vector and shape results for Karin will constrain models of family formation that include spin properties; in the meantime we briefly discuss Karin's own spin in the context of those of other members of the Karin cluster and the parent body's siblings in the Koronis family.

  8. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, the motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to keep the visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as far as possible. In addition, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, the scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  9. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, the motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to keep the visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as far as possible. In addition, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, the scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  10. Identifying images of handwritten digits using deep learning in H2O

    NASA Astrophysics Data System (ADS)

    Sadhasivam, Jayakumar; Charanya, R.; Kumar, S. Harish; Srinivasan, A.

    2017-11-01

    Automatic digit recognition is of wide interest today, and deep learning techniques make object recognition in image data feasible. Recognizing digits has become a fundamental component of many real-world applications. Because digits are written in many different styles, identifying a digit requires recognizing and classifying it with the help of machine learning methods. This work is based on a supervised learning vector quantization neural network, a type of artificial neural network. Images of digits are recognized, trained on, and tested: after the network is built, it is trained using the training dataset vectors, and testing is applied to digit images that are separated from one another by segmenting the image and resizing each digit image accordingly for better accuracy.

  11. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.

  12. Entropy of balance - some recent results

    PubMed Central

    2010-01-01

    Background Entropy, when applied to biological signals, is expected to reflect the state of the biological system. However, the physiological interpretation of the entropy is not always straightforward. When should high entropy be interpreted as a healthy sign, and when as a marker of deteriorating health? We address this question for the particular case of human standing balance and Center of Pressure data. Methods We have measured and analyzed balance data of 136 participants (young, n = 45; elderly, n = 91) comprising in all 1085 trials, and calculated the Sample Entropy (SampEn) for the medio-lateral (M/L) and anterior-posterior (A/P) Center of Pressure (COP) together with the Hurst self-similarity (ss) exponent α using Detrended Fluctuation Analysis (DFA). The COP was measured with a force plate in eight 30-second trials with eyes closed, eyes open, foam, self-perturbation and nudge conditions. Results 1) There is a significant difference in SampEn for the A/P-direction between the elderly and the younger groups: old > young. 2) For the elderly we have in general A/P > M/L. 3) For the younger group there was no significant A/P-M/L difference, with the exception of the nudge trials, where we had the reverse situation, A/P < M/L. 4) For the elderly we have Eyes Closed > Eyes Open. 5) In the case of the Hurst ss-exponent we have, for the elderly, M/L > A/P. Conclusions These results seem to require some modifications of the more or less established attention-constraint interpretation of entropy. This holds that higher entropy correlates with a more automatic and less constrained mode of balance control, and that higher entropy reflects, in this sense, more efficient balancing. PMID:20670457
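
    For readers unfamiliar with the measure, a minimal Sample Entropy sketch is given below; the template length m = 2 and tolerance r = 0.2·SD are conventional defaults assumed here, not necessarily the parameters used in the study.

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """Sample Entropy of a 1-D series: -ln(A/B), where B counts pairs
            of length-m templates within tolerance r (Chebyshev distance) and
            A counts the same pairs extended to length m+1. Self-matches are
            excluded; m and r = r_factor*std are conventional assumptions."""
            x = np.asarray(x, dtype=float)
            r = r_factor * np.std(x)
            n = len(x)

            def count_matches(length):
                # Use n - m templates for both lengths so A and B are comparable.
                templates = np.array([x[i:i + length] for i in range(n - m)])
                count = 0
                for i in range(len(templates) - 1):
                    dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(dist <= r)
                return count

            b = count_matches(m)
            a = count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        # Example on a 30 s COP trace sampled at 100 Hz:
        # sampen_ml = sample_entropy(cop_ml)   # medio-lateral component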

  13. Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.

    NASA Astrophysics Data System (ADS)

    Czerwinski, Michael Joseph

    The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and an interpretation of these quantized values for the sub-band conductances are made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone. The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.

  14. Classification of postural profiles among mouth-breathing children by learning vector quantization.

    PubMed

    Mancini, F; Sousa, F S; Hummel, A D; Falcão, A E J; Yi, L C; Ortolani, C F; Sigulem, D; Pisa, I T

    2011-01-01

    Mouth breathing is a chronic syndrome that may bring about postural changes. Finding characteristic patterns of changes occurring in the complex musculoskeletal system of mouth-breathing children has been a challenge. Learning vector quantization (LVQ) is an artificial neural network model that can be applied for this purpose. The aim of the present study was to apply LVQ to determine the characteristic postural profiles shown by mouth-breathing children, in order to further understand abnormal posture among mouth breathers. Postural training data on 52 children (30 mouth breathers and 22 nose breathers) and postural validation data on 32 children (22 mouth breathers and 10 nose breathers) were used. The performance of LVQ and other classification models was compared in relation to self-organizing maps, back-propagation applied to multilayer perceptrons, Bayesian networks, naive Bayes, J48 decision trees, k, and k-nearest-neighbor classifiers. Classifier accuracy was assessed by means of leave-one-out cross-validation, area under ROC curve (AUC), and inter-rater agreement (Kappa statistics). By using the LVQ model, five postural profiles for mouth-breathing children could be determined. LVQ showed satisfactory results for mouth-breathing and nose-breathing classification: sensitivity and specificity rates of 0.90 and 0.95, respectively, when using the training dataset, and 0.95 and 0.90, respectively, when using the validation dataset. The five postural profiles for mouth-breathing children suggested by LVQ were incorporated into application software for classifying the severity of mouth breathers' abnormal posture.
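
    The study's exact network configuration is not given in this record; as a generic illustration of the LVQ idea it relies on, the sketch below implements plain LVQ1 (prototypes pulled toward correctly classified samples and pushed away otherwise) with arbitrary hyperparameters.

        import numpy as np

        def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=50, seed=0):
            """Minimal LVQ1 sketch: prototypes are initialized from class samples
            and moved toward correctly classified inputs / away from misclassified
            ones. Hyperparameters are illustrative, not those of the study."""
            rng = np.random.default_rng(seed)
            protos, labels = [], []
            for c in np.unique(y):
                idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class,
                                 replace=False)
                protos.append(X[idx])
                labels.extend([c] * prototypes_per_class)
            protos = np.vstack(protos).astype(float)
            labels = np.array(labels)
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    d = np.linalg.norm(protos - X[i], axis=1)
                    k = np.argmin(d)                       # best-matching prototype
                    sign = 1.0 if labels[k] == y[i] else -1.0
                    protos[k] += sign * lr * (X[i] - protos[k])
            return protos, labels

        def predict_lvq(protos, labels, X):
            d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
            return labels[np.argmin(d, axis=1)]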

  15. Information loss in effective field theory: Entanglement and thermal entropies

    NASA Astrophysics Data System (ADS)

    Boyanovsky, Daniel

    2018-03-01

    Integrating out high energy degrees of freedom to yield a low energy effective field theory leads to a loss of information with a concomitant increase in entropy. We obtain the effective field theory of a light scalar field interacting with heavy fields after tracing out the heavy degrees of freedom from the time evolved density matrix. The initial density matrix describes the light field in its ground state and the heavy fields in equilibrium at a common temperature T . For T =0 , we obtain the reduced density matrix in a perturbative expansion; it reveals an emergent mixed state as a consequence of the entanglement between light and heavy fields. We obtain the effective action that determines the time evolution of the reduced density matrix for the light field in a nonperturbative Dyson resummation of one-loop correlations of the heavy fields. The Von-Neumann entanglement entropy associated with the reduced density matrix is obtained for the nonresonant and resonant cases in the asymptotic long time limit. In the nonresonant case the reduced density matrix displays an incipient thermalization albeit with a wave-vector, time and coupling dependent effective temperature as a consequence of memory of initial conditions. The entanglement entropy is time independent and is the thermal entropy for this effective, nonequilibrium temperature. In the resonant case the light field fully thermalizes with the heavy fields, the reduced density matrix loses memory of the initial conditions and the entanglement entropy becomes the thermal entropy of the light field. We discuss the relation between the entanglement entropy ultraviolet divergences and renormalization.

  16. Constrained dynamics of two interacting relativistic particles in the Faddeev-Jackiw symplectic framework

    NASA Astrophysics Data System (ADS)

    Rodríguez-Tzompantzi, Omar

    2018-05-01

    The Faddeev-Jackiw symplectic formalism for constrained systems is applied to analyze the dynamical content of a model describing two massive relativistic particles with interaction, which can also be interpreted as a bigravity model in one dimension. We systematically investigate the nature of the physical constraints, for which we also determine the zero-mode structure of the corresponding symplectic matrix. After identifying the whole set of constraints, we obtain the transformation laws, corresponding to gauge symmetries, for the whole set of dynamical variables, encoded in the remaining zero modes. In addition, we use an appropriate gauge-fixing procedure, the conformal gauge, to compute the quantization brackets (Faddeev-Jackiw brackets) and also obtain the number of physical degrees of freedom. Finally, we argue that this symplectic approach can be helpful for assessing physical constraints and understanding the gauge structure of theories of interacting spin-2 fields.

  17. Diffeomorphism invariance and black hole entropy

    NASA Astrophysics Data System (ADS)

    Huang, Chao-Guang; Guo, Han-Ying; Wu, Xiaoning

    2003-11-01

    The Noether-charge and the Hamiltonian realizations for the diff(M) algebra in diffeomorphism-invariant gravitational theories without a cosmological constant in any dimension are studied in a covariant formalism. We analyze how the Hamiltonian functionals form the diff(M) algebra under the Poisson brackets and show how the Noether charges with respect to the diffeomorphism generated by the vector fields and their variations in n-dimensional general relativity form this algebra. The asymptotic behaviors of vector fields generating diffeomorphism of the manifold with boundaries are discussed. It is shown that the “central extension” for a large class of vector fields is always zero on the Killing horizon. We also check whether choosing the vector fields near the horizon may pick up the Virasoro algebra. The conclusion is unfortunately negative in any dimension.

  18. Viscoelasticity and pattern formations in stock market indices

    NASA Astrophysics Data System (ADS)

    Gündüz, Güngör; Gündüz, Aydın

    2017-06-01

    The viscoelastic and thermodynamic properties of four stock indices, namely DJI, Nasdaq-100, Nasdaq-Composite, and S&P, were analyzed for a period of 30 years from 1986 to 2015. The asset values (or index) can be placed into an Aristotelian 'potentiality-actuality' framework by using a scattering diagram. Thus, the index values can be transformed into vectorial forms in a scattering diagram, and each vector can be split into its horizontal and vertical components. According to viscoelastic theory, the horizontal component represents the conservative behavior and the vertical component the dissipative behavior. The related storage and loss moduli of these components are determined, and then work-like and heat-like terms are calculated. It is found that the change of the storage and loss moduli with Wiener noise (W) exhibits interesting patterns. The loss modulus shows a feather-like pattern, whereas the storage modulus shows a figurative man-like pattern. These patterns are formed due to branchings in the system and imply that stock indices have a kind of 'fine order' which can be detected when the change of the modulus values is plotted with respect to the Wiener noise. Theoretical calculations show that the tips of the feather-like patterns stay at negative W values but get closer to W = 0 as the drift in the system increases. The shift of the tip point from W = 0 indicates that the price change involves a higher number of positive Wiener corrections than negative ones. The work-like and heat-like terms also exhibit patterns, but with a different appearance than the modulus patterns. The decisional changes of people are reflected as arrows in the scattering diagram, and the propagation path of these vectors resembles a crack propagation path. The distribution of the angle between two subsequent vectors shows a peak at 90°, indicating that the path mostly obeys the crack path occurring in hard objects. Entropy mimics the Wiener noise in the evolution of the stock index value, although they describe different properties. Entropy fluctuates during fast increases and fast falls of the index value, and the fluctuation becomes very high at minimum values of the index. The curvature of a circle passing through the two ends of a vector and the point of intersection of its horizontal and vertical components designates the reactivity involved in the market, and the radius of this circle behaves somewhat similarly to the entropy and the Wiener noise. The change of entropy and Wiener noise with radius exhibits patterns with four branches.

  19. Asymptotically spacelike warped anti-de Sitter spacetimes in generalized minimal massive gravity

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Adami, H.

    2017-06-01

    In this paper we show that the warped AdS3 black hole spacetime is a solution of generalized minimal massive gravity (GMMG) and introduce suitable boundary conditions for asymptotically warped AdS3 spacetimes. We then find the Killing vector fields such that the transformations generated by them preserve the considered boundary conditions. We calculate the conserved charges which correspond to the obtained Killing vector fields and show that the algebra of the asymptotic conserved charges is given as the semidirect product of the Virasoro algebra with the U(1) current algebra. We use a particular Sugawara construction to reconstruct the conformal algebra, and are thus allowed to use the Cardy formula to calculate the entropy of the warped black hole. We demonstrate that the gravitational entropy of the warped black hole exactly coincides with what we obtain via Cardy's formula. As we expect, the warped Cardy formula also gives exactly the same result as the usual Cardy formula. We calculate the mass and angular momentum of the warped black hole and then check that the obtained mass, angular momentum, and entropy satisfy the first law of black hole mechanics. According to the results of this paper, we believe that the dual theory of the warped AdS3 black hole solution of GMMG is a warped CFT.

  20. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ℏ → 0, the classical dynamics is returned. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ⁺, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from the position to the momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here and found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we return a finite-dimensional matrix propagator. We present this connection explicitly for several examples.

  1. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Currently, chatter has become the critical factor in hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on dynamic cutting force simulation model and support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and the wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Then combining with the dynamic cutting force simulation model, the stability lobes diagram (SLD) can be estimated. Finally, the predicted results are compared with existing methods such as zero-order analytical (ZOA) and semi-discretization (SD) method as well as actual cutting experimental results to confirm the validity of this new method.
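
    A minimal sketch of the feature-plus-classifier chain described above is given below; it assumes a db4 wavelet, a 4-level decomposition, and scikit-learn's SVC as a stand-in for the MATLAB LIBSVM toolbox, none of which are choices specified by the paper.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def wavelet_energy_entropy(signal, wavelet='db4', level=4):
            """Feature sketch: decompose the cutting-force signal with a discrete
            wavelet transform, compute the relative energy of each sub-band, and
            return the Shannon entropy of that energy distribution. The wavelet
            and decomposition level are illustrative choices."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            p = energies / energies.sum()
            return -np.sum(p * np.log(p + 1e-12))

        # Stand-in for the LIBSVM classifier: label 1 = chatter, 0 = stable cut.
        # X = np.array([[wavelet_energy_entropy(sig)] for sig in force_signals])
        # clf = SVC(kernel='rbf').fit(X, labels)
        # stability = clf.predict([[wavelet_energy_entropy(new_signal)]])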

  2. Learning probability distributions from smooth observables and the maximum entropy principle: some remarks

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Monasson, Rémi

    2015-09-01

    The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons of the success of MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measures of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.

  3. Effects of viscous pressure on warm inflationary generalized cosmic Chaplygin gas model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharif, M.; Saleem, Rabia, E-mail: msharif.math@pu.edu.pk, E-mail: rabiasaleem1988@yahoo.com

    This paper is devoted to studying the effects of bulk viscous pressure on an inflationary generalized cosmic Chaplygin gas model using an FRW background. The matter contents of the universe are assumed to be an inflaton and an imperfect fluid. We evaluate inflaton fields, potentials and entropy density for variable as well as constant dissipation and bulk viscous coefficients in weak as well as high dissipative regimes during the intermediate era. In order to discuss inflationary perturbations, we evaluate the entropy density, scalar (tensor) power spectra, their corresponding spectral indices, the tensor-scalar ratio and the running of the spectral index in terms of the inflaton, which are constrained using recent Planck, WMAP7 and Bicep2 probes.

  4. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
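
    As a hedged sketch of the codebook-generation procedure described above, the fragment below draws Laplacian samples from uniform variates (inverse-CDF method), groups them into vectors, and conditions them by weighting their DCT coefficients; the weight vector stands in for the paper's HVS-optimal weight matrix, and the codebook size, vector length, and seed handling are illustrative assumptions.

        import numpy as np
        from scipy.fft import dct, idct

        def mvq_codebook(lam, codebook_size=256, vec_len=16, seed=0):
            """Sketch of model-based VQ codebook generation: Laplacian samples
            (parameter lam estimated from the image and sent as side information)
            are grouped into vectors and conditioned by filtering their DCT
            coefficients with a perceptual weight vector. The weights below are
            placeholders for the paper's HVS model."""
            rng = np.random.default_rng(seed)
            # Laplacian variates from uniform random numbers (inverse-CDF method).
            u = rng.uniform(-0.5, 0.5, size=(codebook_size, vec_len))
            samples = -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))
            # Placeholder perceptual weights: emphasize low-frequency DCT coefficients.
            weights = 1.0 / (1.0 + np.arange(vec_len))
            conditioned = idct(dct(samples, norm='ortho', axis=1) * weights,
                               norm='ortho', axis=1)
            return conditioned

        # The decoder regenerates the identical codebook from the transmitted lam
        # (and, in this sketch, a shared seed).
        # codebook = mvq_codebook(lam=4.2)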

  5. Transition amplitude for two-time physics

    NASA Astrophysics Data System (ADS)

    Frederico, João E.; Rivelles, Victor O.

    2010-07-01

    We present the transition amplitude for a particle moving in a space with two times and D space dimensions having an Sp(2,R) local symmetry and an SO(D,2) rigid symmetry. It was obtained from the BRST-BFV quantization with a unique gauge choice. We show that by constraining the initial and final points of this amplitude to lie on some hypersurface of the D+2 space the resulting amplitude reproduces well-known systems in lower dimensions. This work provides an alternative way to derive the effects of two-time physics where all the results come from a single transition amplitude.

  6. Transition amplitude for two-time physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederico, Joao E.; Rivelles, Victor O.; Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318, 05314-970, Sao Paulo, SP

    2010-07-15

    We present the transition amplitude for a particle moving in a space with two times and D space dimensions having an Sp(2,R) local symmetry and an SO(D,2) rigid symmetry. It was obtained from the BRST-BFV quantization with a unique gauge choice. We show that by constraining the initial and final points of this amplitude to lie on some hypersurface of the D+2 space the resulting amplitude reproduces well-known systems in lower dimensions. This work provides an alternative way to derive the effects of two-time physics where all the results come from a single transition amplitude.

  7. Functional determinants, index theorems, and exact quantum black hole entropy

    NASA Astrophysics Data System (ADS)

    Murthy, Sameer; Reys, Valentin

    2015-12-01

    The exact quantum entropy of BPS black holes can be evaluated using localization in supergravity. An important ingredient in this program, that has been lacking so far, is the one-loop effect arising from the quadratic fluctuations of the exact deformation (the QV operator). We compute the fluctuation determinant for vector multiplets and hyper multiplets around Q-invariant off-shell configurations in four-dimensional N=2 supergravity with AdS 2 × S 2 boundary conditions, using the Atiyah-Bott fixed-point index theorem and a subsequent zeta function regularization. Our results extend the large-charge on-shell entropy computations in the literature to a regime of finite charges. Based on our results, we present an exact formula for the quantum entropy of BPS black holes in N=2 supergravity. We explain cancellations concerning 1/8 -BPS black holes in N=8 supergravity that were observed in arXiv:1111.1161. We also make comments about the interpretation of a logarithmic term in the topological string partition function in the low energy supergravity theory.

  8. The research of "blind" spot in the LVQ network

    NASA Astrophysics Data System (ADS)

    Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli

    2017-04-01

    Competitive neural networks are now widely used in pattern recognition, classification, and other areas, and show clear advantages over traditional clustering methods. They nevertheless remain inadequate in several respects and need further improvement. Based on the learning vector quantization (LVQ) network proposed by Kohonen [1], this paper addresses the large training errors that arise when there are "blind" spots in a network, by introducing threshold-value learning rules, and implements the result in Matlab.

  9. Research on conceptual/innovative design for the life cycle

    NASA Technical Reports Server (NTRS)

    Cagan, Jonathan; Agogino, Alice M.

    1990-01-01

    The goal of this research is developing and integrating qualitative and quantitative methods for life cycle design. The definition of the problem includes formal computer-based methods limited to final detailing stages of design; CAD data bases do not capture design intent or design history; and life cycle issues were ignored during early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; multistart vector quantization optimization algorithm; intelligent manufacturing: IDES - Influence Diagram Architecture; and 1st PRINCE (FIRST PRINciple Computational Evaluator).

  10. On the Maxwellian distribution, symmetric form, and entropy conservation for the Euler equations

    NASA Technical Reports Server (NTRS)

    Deshpande, S. M.

    1986-01-01

    The Euler equations of gas dynamics have some very interesting properties in that the flux vector is a homogeneous function of the unknowns and the equations can be cast in symmetric hyperbolic form and satisfy the entropy conservation. The Euler equations are the moments of the Boltzmann equation of the kinetic theory of gases when the velocity distribution function is a Maxwellian. The present paper shows the relationship between the symmetrizability and the Maxwellian velocity distribution. The entropy conservation is in terms of the H-function, which is a slight modification of the H-function first introduced by Boltzmann in his famous H-theorem. In view of the H-theorem, it is suggested that the development of total H-diminishing (THD) numerical methods may be more profitable than the usual total variation diminishing (TVD) methods for obtaining wiggle-free solutions.
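
    As a brief worked statement of the properties invoked above, assuming the standard one-dimensional perfect-gas Euler system (the paper's modified H-function is not reproduced here):

        \[
        \partial_t U + \partial_x F(U) = 0, \qquad U = (\rho,\; \rho u,\; \rho E)^{T},
        \]
        \[
        F(\alpha U) = \alpha\, F(U) \quad\Longrightarrow\quad F(U) = A(U)\, U, \qquad A(U) = \frac{\partial F}{\partial U},
        \]

    so the flux is homogeneous of degree one in the conserved variables, and smooth solutions additionally satisfy the entropy conservation law \(\partial_t(\rho s) + \partial_x(\rho u s) = 0\) for the specific entropy \(s\), which is the property that the symmetric hyperbolic form and the H-function argument build on.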

  11. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
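
    A minimal sketch of the Nyström idea mentioned above is given below, assuming uniformly sampled landmark points and a similarity matrix; the integration with relational LVQ and relational GTM is not shown.

        import numpy as np

        def nystroem_approx(C, landmark_rows):
            """Nystroem sketch: C is the n x m block of the full similarity
            matrix K restricted to m landmark columns, and W = C[landmark_rows]
            is the m x m landmark block. The approximation K ~= C W^+ C^T is
            available without ever forming the full n x n matrix."""
            W = C[landmark_rows, :]                     # shape (m, m)
            return C @ np.linalg.pinv(W) @ C.T

        # Example with an RBF similarity and randomly chosen landmarks
        # (illustrative; the full K_hat is formed here only for demonstration):
        # rng = np.random.default_rng(0)
        # X = rng.normal(size=(1000, 5))
        # landmarks = rng.choice(len(X), size=50, replace=False)
        # C = np.exp(-np.linalg.norm(X[:, None, :] - X[None, landmarks, :], axis=2) ** 2)
        # K_hat = nystroem_approx(C, landmarks)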

  12. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
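
    As a hedged sketch of the vector-sum codebook structure that makes the search efficient (the gain quantizer, long-term predictor, and the actual 4.8 kbps bit allocation are not reproduced; the basis-vector count and length below are illustrative):

        import numpy as np

        def vselp_codevector(basis, index):
            """Vector-sum codebook sketch: with M basis vectors, codeword `index`
            (0 .. 2**M - 1) selects a +1/-1 weight for each basis vector from its
            bits, and the codevector is the weighted sum. Only the M basis vectors
            are stored, and codevectors whose indices differ in one bit differ by a
            single basis-vector sign flip, which the codebook search exploits."""
            M = len(basis)
            signs = np.array([1.0 if (index >> m) & 1 else -1.0 for m in range(M)])
            return signs @ basis                 # shape: (subframe_length,)

        # Example: M = 7 basis vectors of length 40 give a 128-entry codebook
        # that occupies only 7 x 40 stored values.
        # basis = np.random.default_rng(0).normal(size=(7, 40))
        # c_37 = vselp_codevector(basis, 37)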

  13. Forsterite Shock Temperatures and Entropy: New Scaling Laws for Impact Melting and Vaporization

    NASA Astrophysics Data System (ADS)

    Davies, E.; Root, S.; Kraus, R. G.; Townsend, J. P.; Spaulding, D.; Stewart, S. T.; Jacobsen, S. B.; Fratanduono, D.; Millot, M. A.; Mattsson, T. R.; Hanshaw, H. L.

    2017-12-01

    The observed masses, radii and temperatures of thousands of extra-solar planets have challenged our theoretical understanding of planet formation and planetary structures. Planetary materials are subject to extreme pressures and temperatures during formation and within the present-day interiors of large bodies. Here, we focus on improving understanding of the physical properties of rocky planets for calculations of internal structure and the outcomes of giant impacts. We performed flyer plate impact experiments on forsterite [Mg2SiO4] on the Z-Machine at Sandia National Laboratory and decaying shock temperature measurements at the Omega EP laser at U. Rochester. At Z, planar, supported shock waves are generated in single crystal samples, permitting observation of both compressed and released states. Using available static and dynamic thermodynamic data, we calculate absolute entropy and heat capacity along the forsterite shock Hugoniot. Entropy and heat capacity on the Hugoniot are larger than previous estimates. Our data constrain the thermodynamic properties of forsterite liquid at high pressures and temperatures and the amount of melt and vapor produced during impact events. For an ambient pressure of 1 bar, shock-vaporization begins upon reaching the liquid region on the forsterite Hugoniot (about 200 GPa). Using hydrocode simulations of giant impacts between rocky planets with forsterite mantles and iron cores and the new experimentally-constrained forsterite shock entropy, we present a new scaling law for the fraction of mantle that is melted or vaporized by the initial shock wave. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. Prepared by LLNL under Contract DE-AC52-07NA27344. Prepared by the Center for Frontiers in High Energy Density Science

  14. Quantization and superselection sectors III: Multiply connected spaces and indistinguishable particles

    NASA Astrophysics Data System (ADS)

    Landsman, N. P. Klaas

    2016-09-01

    We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel’s notion of C∗-algebraic (“strict”) deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which render the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD (i.e. Quantum Chromodynamics)).

  15. Analysis of Asymmetry by a Slide-Vector.

    ERIC Educational Resources Information Center

    Zielman, Berrie; Heiser, Willem J.

    1993-01-01

    An algorithm based on the majorization theory of J. de Leeuw and W. J. Heiser is presented for fitting the slide-vector model. It views the model as a constrained version of the unfolding model. A three-way variant is proposed, and two examples from market structure analysis are presented. (SLD)

  16. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  17. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.

  18. Statistical analogues of thermodynamic extremum principles

    NASA Astrophysics Data System (ADS)

    Ramshaw, John D.

    2018-05-01

    As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = −k_B Σ_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E − TS and the grand potential J = F − μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
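
    For the canonical case the equivalence can be stated compactly; this is the standard textbook derivation, repeated here in the abstract's notation:

        \[
        \max_{p}\Big\{\, S = -k_{B}\sum_i p_i \log p_i \,\Big\}
        \;\;\text{s.t.}\;\; \sum_i p_i = 1,\;\; \sum_i p_i E_i = E
        \quad\Longrightarrow\quad
        p_i = \frac{e^{-\beta E_i}}{Z},\qquad Z = \sum_i e^{-\beta E_i},\qquad \beta = \frac{1}{k_{B}T},
        \]

    and, for fixed temperature T, the same distribution is obtained by minimising the statistical analogue of the Helmholtz free energy

        \[
        F[p] \;=\; \sum_i p_i E_i \;+\; k_{B}T \sum_i p_i \log p_i ,
        \]

    subject only to normalisation, which is the "essentially unconstrained" minimisation referred to above.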

  19. Note on the Noether charge and holographic transports

    NASA Astrophysics Data System (ADS)

    Fan, Zhong-Ying

    2018-03-01

    We clarify the relation between the Noether charge associated to an arbitrary vector field and the equations of motion by revisiting Wald formalism. For a timelike Killing vector, aspects of the Noether charge suggest that it is dual to the heat current in the boundary for general holographic theories. For a spacelike Killing vector, we interpret the Noether charge (at the transverse direction) as shear stress of the dual fluid so we can compute the ratio of shear viscosity to entropy density by simply using the infrared data on the black hole event horizon. We test the new method for Einstein gravity and Gauss-Bonnet gravity and find that it produces correct results for both cases even in the presence of additional matter fields.

  20. Computation and analysis for a constrained entropy optimization problem in finance

    NASA Astrophysics Data System (ADS)

    He, Changhong; Coleman, Thomas F.; Li, Yuying

    2008-12-01

    In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.

  1. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, and eigenvalues). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code, such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a constrained optimization code, such as ADS.

  2. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.

  3. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  4. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  5. Constraints on models of scalar and vector leptoquarks decaying to a quark and a neutrino at √s = 13 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, Albert M; et al.

    The results of a previous search by the CMS Collaboration for squarks and gluinos are reinterpreted to constrain models of leptoquark (LQ) production. The search considers jets in association with a transverse momentum imbalance, using the M_T2 variable. The analysis uses proton-proton collision data at √s = 13 TeV, recorded with the CMS detector at the LHC in 2016 and corresponding to an integrated luminosity of 35.9 fb^-1. Leptoquark pair production is considered with LQ decays to a neutrino and a top, bottom, or light quark. This reinterpretation considers higher mass values than the original CMS search to constrain both scalar and vector LQs. Limits on the cross section for LQ pair production are derived at the 95% confidence level depending on the LQ decay mode. A vector LQ decaying with a 50% branching fraction to t…

  6. Quantum entanglement of a harmonic oscillator with an electromagnetic field.

    PubMed

    Makarov, Dmitry N

    2018-05-29

    At present, there are many methods for obtaining quantum entanglement of particles with an electromagnetic field. Most methods have a low probability of quantum entanglement and lack an exact theoretical apparatus, relying instead on approximate solutions of the Schrödinger equation. There is a need for new methods of obtaining quantum-entangled particles and for mathematically exact studies of such methods. In this paper, a quantum harmonic oscillator (for example, an electron in a magnetic field) interacting with a quantized electromagnetic field is considered. Based on the exact solution of the Schrödinger equation for this system, it is shown that for certain parameters there can be a large quantum entanglement between the electron and the electromagnetic field. Quantum entanglement is analyzed on the basis of a mathematically exact expression for the Schmidt modes and the von Neumann entropy.

  7. Thermodynamic view on decision-making process: emotions as a potential power vector of realization of the choice.

    PubMed

    Pakhomov, Anton; Sudin, Natalya

    2013-12-01

    This research is devoted to possible mechanisms of decision-making within the framework of thermodynamic principles. It is also shown that, in response to emotion, the decision-making system includes a vector component, which often appears to be a necessary condition for transferring the system from one state to another. The phases of the decision-making system are treated as nonequilibrium and irreversible, so that the laws of thermodynamics apply. A mathematical model of decision choice is offered, proceeding from principles of nonlinear dynamics that account for instability of motion and bifurcation. The thermodynamic component of the decision-making process, based on vector transfer of energy induced by emotion at a given time, is surveyed. A three-module model of decision making based on principles of thermodynamics is proposed. It is suggested that when an entropic impact due to emotion acts on the closed system (the human brain), chaos initially arises; then, after fluctuations among the possible alternatives (the reactions of brain zones in reply to the external influence), an order forms and a choice among alternatives is made, according to the initial conditions and the state of the closed system. An entropy calculation of choice expectation for negative and positive emotions suggests the possible existence of a "law of emotion conservation," in accordance with several sets of experimental data.

  8. Black holes in vector-tensor theories and their thermodynamics

    NASA Astrophysics Data System (ADS)

    Fan, Zhong-Ying

    2018-01-01

    In this paper, we study Einstein gravity either minimally or non-minimally coupled to a vector field which breaks the gauge symmetry explicitly in general dimensions. We first consider a minimal theory which is simply the Einstein-Proca theory extended with a quartic self-interaction term for the vector field. We obtain its general static maximally symmetric black hole solution and study the thermodynamics using the Wald formalism. The solution behaves much like a Reissner-Nordström black hole, despite the fact that a global charge cannot be defined for the vector. For non-minimal theories, we obtain many exact black hole solutions, depending on the parameters of the theories. In particular, many of the solutions are general, static and maximally symmetric. However, there are some subtleties and ambiguities in the derivation of the first laws, because the existence of an algebraic degree of freedom of the vector in general invalidates the Wald entropy formula. The thermodynamics of these solutions deserves further study.

  9. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.

  10. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format in daily life; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Audio recordings from digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are useful for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this piece of work is the first one to detect double compression of audio signals.
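
    The sketch below illustrates the general idea of a first-digit feature fed to an SVM, under clearly labeled assumptions: the "coefficients" are synthetic Laplacian stand-ins for quantized MDCT data (double compression is mimicked by requantizing with a coarser step), so the numbers are purely illustrative rather than the paper's results.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)

      def first_digit_hist(coeffs):
          # Distribution of the first significant digits 1..9 of the nonzero
          # quantized coefficient magnitudes (one feature vector per file).
          mags = np.abs(coeffs[coeffs != 0]).astype(float)
          first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
          hist = np.bincount(first, minlength=10)[1:10]
          return hist / hist.sum()

      def fake_track(double):
          # Synthetic stand-in for quantized MDCT coefficients: single compression
          # quantizes once; "double" compression requantizes with a coarser step.
          x = rng.laplace(scale=40.0, size=4000)
          q = np.round(x)
          if double:
              q = np.round(q / 3.0) * 3.0
          return q.astype(int)

      X = np.array([first_digit_hist(fake_track(d)) for d in [0, 1] * 100])
      y = np.array([0, 1] * 100)
      print(cross_val_score(SVC(kernel='rbf'), X, y, cv=5).mean())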

  11. Conditional symmetries in axisymmetric quantum cosmologies with scalar fields and the fate of the classical singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zampeli, Adamantia; Pailas, Theodoros; Terzis, Petros A.

    2016-05-01

    In this paper, the classical and quantum solutions of some axisymmetric cosmologies coupled to a massless scalar field are studied in the context of the minisuperspace approximation. In these models, the singular nature of the Lagrangians entails a search for possible conditional symmetries. These have been proven to be the simultaneous conformal symmetries of the supermetric and the superpotential. The quantization is performed by adopting the Dirac proposal for constrained systems, i.e. promoting the first-class constraints to operators annihilating the wave function. To further enrich the approach, we follow [1] and impose the operators related to the classical conditional symmetries on the wave function. These additional equations select particular solutions of the Wheeler-DeWitt equation. In order to gain some physical insight from the quantization of these cosmological systems, we perform a semiclassical analysis following the Bohmian approach to quantum theory. The generic result is that, in all but one model, one can find appropriate ranges of the parameters, so that the emerging semiclassical geometries are non-singular. An attempt at physical interpretation involves the study of the effective energy-momentum tensor, which corresponds to an imperfect fluid.

  12. On the constrained B-type Kadomtsev-Petviashvili hierarchy: Hirota bilinear equations and Virasoro symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Hsin-Fu; Tu, Ming-Hsien

    2011-03-15

    We derive the bilinear equations of the constrained BKP hierarchy from the calculus of pseudodifferential operators. The full hierarchy equations can be expressed in Hirota's bilinear form characterized by the functions ρ, σ, and τ. Besides, we also give a modification of the original Orlov-Schulman additional symmetry to preserve the constrained form of the Lax operator for this hierarchy. The vector fields associated with the modified additional symmetry turn out to satisfy a truncated centerless Virasoro algebra.

  13. Solution of a Complex Least Squares Problem with Constrained Phase.

    PubMed

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
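
    The abstract does not spell out the direct method, so the fragment below is only a brute-force illustration of the problem itself, under assumed sizes and noise: for each candidate common phase, the unknowns become real and the problem reduces to an ordinary real least-squares fit, and the phase with the smallest residual is kept.

      import numpy as np

      rng = np.random.default_rng(2)

      # Simulated complex system A z = b whose true solution has a common phase.
      m, n = 20, 5
      A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
      u_true, phi_true = rng.standard_normal(n), 0.8      # real amplitudes, common phase
      b = A @ (u_true * np.exp(1j * phi_true))
      b += 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

      def solve_fixed_phase(phi):
          # With the phase fixed, the unknowns are real: stack the real and
          # imaginary parts of A*exp(i*phi) and b into one real LS problem.
          Ar = np.vstack([np.real(A * np.exp(1j * phi)), np.imag(A * np.exp(1j * phi))])
          br = np.concatenate([b.real, b.imag])
          u, *_ = np.linalg.lstsq(Ar, br, rcond=None)
          return u, np.linalg.norm(Ar @ u - br)

      phis = np.linspace(0, np.pi, 721)    # phase defined modulo pi up to the sign of u
      best_phi = min(phis, key=lambda p: solve_fixed_phase(p)[1])
      u_hat, _ = solve_fixed_phase(best_phi)
      print(best_phi, np.round(u_hat, 3))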

  14. Constrained multibody system dynamics: An automated approach

    NASA Technical Reports Server (NTRS)

    Kamman, J. W.; Huston, R. L.

    1982-01-01

    The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. The closed-loop problem of multibody chain systems is addressed. The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. The modification is based upon a solution of the constraint equations, obtained through a zero-eigenvalue theorem, and amounts to a contraction of the dynamical equations. For a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as constraint vectors in n-dimensional space. In this setting the system itself is free to move in the n - m directions that are orthogonal to the constraint vectors.
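
    As a small numerical illustration of the last point (assuming a generic full-rank constraint matrix B, which is not part of the original record), the admissible velocity directions are exactly the null space of B, i.e. the n - m directions orthogonal to the constraint vectors; an SVD gives an orthonormal basis for them.

      import numpy as np

      rng = np.random.default_rng(3)

      # Constraint matrix B (m x n): each row is a constraint vector in the
      # n-dimensional space of generalized velocities, with B @ qdot = 0.
      n, m = 6, 2
      B = rng.standard_normal((m, n))

      # The last n - m right singular vectors span the admissible motions.
      _, _, Vt = np.linalg.svd(B)
      null_basis = Vt[m:].T                                # shape (n, n - m)

      print(np.allclose(B @ null_basis, 0.0))              # constraints satisfied
      qdot = null_basis @ rng.standard_normal(n - m)       # any admissible velocity
      print(np.round(B @ qdot, 12))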

  15. Exact symmetries in the velocity fluctuations of a hot Brownian swimmer

    NASA Astrophysics Data System (ADS)

    Falasco, Gianmaria; Pfaller, Richard; Bregulla, Andreas P.; Cichos, Frank; Kroy, Klaus

    2016-09-01

    Symmetries constrain dynamics. We test this fundamental physical principle, experimentally and by molecular dynamics simulations, for a hot Janus swimmer operating far from thermal equilibrium. Our results establish scalar and vectorial steady-state fluctuation theorems and a thermodynamic uncertainty relation that link the fluctuating particle current to its entropy production at an effective temperature. A Markovian minimal model elucidates the underlying nonequilibrium physics.

  16. Heavy fields and gravity

    NASA Astrophysics Data System (ADS)

    Goon, Garrett

    2017-01-01

    We study the effects of heavy fields on 4D spacetimes with flat, de Sitter and anti-de Sitter asymptotics. At low energies, matter generates specific, calculable higher derivative corrections to the GR action which perturbatively alter the Schwarzschild-(A)dS family of solutions. The effects of massive scalars, Dirac spinors and gauge fields are each considered. The six-derivative operators they produce, such as ~R^3 terms, generate the leading corrections. The induced changes to horizon radii, Hawking temperatures and entropies are found. Modifications to the energy of large AdS black holes are derived by imposing the first law. An explicit demonstration of the replica trick is provided, as it is used to derive black hole and cosmological horizon entropies. Considering entropy bounds, it is found that scalars and fermions increase the entropy one can store inside a region bounded by a sphere of fixed size, but, oddly, vectors lead to a decrease. We also demonstrate, however, that many of the corrections fall below the resolving power of the effective field theory and are therefore untrustworthy. Defining properties of black holes, such as the horizon area and Hawking temperature, prove to be remarkably robust against higher derivative gravitational corrections.

  17. Multichannel interictal spike activity detection using time-frequency entropy measure.

    PubMed

    Thanaraj, Palani; Parvathavarthini, B

    2017-06-01

    Localization of interictal spikes is an important clinical step in the pre-surgical assessment of pharmacoresistant epileptic patients. The manual selection of interictal spike periods is cumbersome and involves a considerable analysis workload for the physician. The primary focus of this paper is to automate the detection of interictal spikes for clinical applications in epilepsy localization. The epilepsy localization procedure involves detection of spikes in a multichannel EEG epoch. Therefore, a multichannel time-frequency (T-F) entropy measure is proposed to extract features related to interictal spike activity. A least squares support vector machine is trained on the proposed feature to classify EEG epochs as either normal or interictal spike periods. The proposed T-F entropy measure, when validated on an epilepsy dataset of 15 patients, shows an interictal spike classification accuracy of 91.20%, a sensitivity of 100% and a specificity of 84.23%. Moreover, an area under the Receiver Operating Characteristic curve of 0.9339 shows the superior classification performance of the proposed T-F entropy measure. The results of this paper show good spike detection accuracy without any prior information about the spike morphology.
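
    One plausible realization of a per-channel time-frequency entropy feature is sketched below; it is not the paper's exact measure, and the sampling rate, window sizes and toy data are all assumptions made for illustration.

      import numpy as np
      from scipy.signal import spectrogram

      def tf_entropy(x, fs=256.0):
          # Shannon entropy of the normalized spectrogram energy of one channel;
          # changes in this value can flag epochs whose T-F energy is unusually
          # concentrated or spread out.
          f, t, S = spectrogram(x, fs=fs, nperseg=64, noverlap=32)
          p = S / S.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      # Feature vector for a multichannel epoch: one entropy value per channel.
      rng = np.random.default_rng(4)
      epoch = rng.standard_normal((8, 1024))   # 8 channels, 4 s at 256 Hz (toy data)
      features = np.array([tf_entropy(ch) for ch in epoch])
      print(np.round(features, 2))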

  18. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP is determined as the sample that is followed by a large decrease in entropy. SVM regression is employed to mitigate the ranging error by modeling the relationship between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726

  19. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), the Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.

  20. Three learning phases for radial-basis-function networks.

    PubMed

    Schwenker, F; Kestler, H A; Palm, G

    2001-05-01

    In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or a pseudo-inverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented for three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This, we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility to use unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. SV learning can be considered, in this context of learning, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
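
    A minimal sketch of two-phase RBF learning as described above, with assumed data and hyperparameters: phase one places centers by vector quantization (k-means) and sets a common width from the center spacing, and phase two solves the output-layer weights with a pseudo-inverse. A third phase would then fine-tune centers, widths and weights jointly with a gradient method.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=400, n_features=10, random_state=0)
      Y = np.eye(2)[y]                           # one-hot targets

      # Phase 1: vector quantization of the inputs to place the RBF centers.
      k = 20
      km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
      centers = km.cluster_centers_
      sigma = np.mean(np.linalg.norm(X - centers[km.labels_], axis=1)) + 1e-9

      def design(X):
          # Gaussian RBF activations of every sample at every center.
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      # Phase 2: output-layer weights by linear least squares (pseudo-inverse).
      W = np.linalg.pinv(design(X)) @ Y
      acc = ((design(X) @ W).argmax(1) == y).mean()
      print(f"training accuracy: {acc:.3f}")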

  1. Geometric effects resulting from square and circular confinements for a particle constrained to a space curve

    NASA Astrophysics Data System (ADS)

    Wang, Yong-Long; Lai, Meng-Yun; Wang, Fan; Zong, Hong-Shi; Chen, Yan-Feng

    2018-04-01

    Investigating the geometric effects resulting from the detailed behavior of the confining potential, we consider square and circular confinements that constrain a particle to a space curve. We find a torsion-induced geometric potential and a curvature-induced geometric momentum only in the square case, and a geometric gauge potential only in the circular case. In the presence of an electromagnetic field, a geometrically induced magnetic moment couples with the magnetic field as an induced Zeeman coupling, again only for the circular confinement. When spin-orbit interaction is considered, we find some additional terms in the spin-orbit coupling, which are induced not only by torsion but also by curvature. Moreover, in the circular case, the spin also couples with an intrinsic angular momentum, which describes the azimuthal motions mapped onto the space curve. As an important conclusion for the thin-layer quantization approach, some substantial geometric effects result from the confinement boundaries. Finally, these results are demonstrated for a helical wire.

  2. Discrimination of coherent features in turbulent boundary layers by the entropy method

    NASA Technical Reports Server (NTRS)

    Corke, T. C.; Guezennec, Y. G.

    1984-01-01

    Entropy in information theory is defined as the expected or mean value of the measure of the amount of self-information contained in the ith point of a distribution series x_i, based on its probability of occurrence p(x_i). If p(x_i) is the probability of the ith state of the system in probability space, then the entropy, E(X) = -Σ_i p(x_i) log p(x_i), is a measure of the disorder in the system. Based on this concept, a method was devised which sought to minimize the entropy in a time series in order to construct the signature of the most coherent motions. The constrained minimization was performed using a Lagrange multiplier approach which resulted in the solution of a simultaneous set of non-linear coupled equations to obtain the coherent time series. The application of the method to space-time data taken by a rake of sensors in the near-wall region of a turbulent boundary layer was presented. The results yielded coherent velocity motions made up of locally decelerated or accelerated fluid having a streamwise scale of approximately 100 ν/u_τ, which is in qualitative agreement with the results from other less objective discrimination methods.

  3. Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.

    PubMed

    Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang

    2015-01-01

    Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, K-means, MSK-means and the support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means, and 0.7% higher accuracy than the SVM.

  4. EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN

    PubMed Central

    AlSharabi, Khalil; Ibrahim, Sutrisno; Alsuwailem, Abdullah

    2017-01-01

    Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought, and a restricted repertoire of activity and interest. In this work, a new computer aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested on a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia. PMID:28484720
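
    A small sketch of the feature-extraction step described above, assuming PyWavelets for the DWT and a Shannon entropy computed from the normalized squared coefficients of each subband; the wavelet, level and toy signal are assumptions, and the resulting vector would then be fed to an ANN classifier.

      import numpy as np
      import pywt

      def subband_entropies(x, wavelet='db4', level=4):
          # Decompose one EEG channel with a DWT and return the Shannon entropy
          # of the normalized coefficient energy in each subband
          # (approximation plus detail subbands).
          feats = []
          for c in pywt.wavedec(x, wavelet, level=level):
              p = c ** 2 / np.sum(c ** 2)
              p = p[p > 0]
              feats.append(-np.sum(p * np.log2(p)))
          return np.array(feats)

      rng = np.random.default_rng(5)
      eeg = rng.standard_normal(1024)          # toy signal standing in for real EEG
      print(np.round(subband_entropies(eeg), 2))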

  5. Distribution of the Habitat Suitability of the Main Malaria Vector in French Guiana Using Maximum Entropy Modeling.

    PubMed

    Moua, Yi; Roux, Emmanuel; Girod, Romain; Dusfour, Isabelle; de Thoisy, Benoit; Seyler, Frédérique; Briolant, Sébastien

    2017-05-01

    Malaria is an important health issue in French Guiana. Its principal mosquito vector in this region is Anopheles darlingi Root. Knowledge of the spatial distribution of this species is still very incomplete due to the extent of French Guiana and the difficulty of accessing most of the territory. Species distribution modeling based on the maximum entropy procedure was used to predict the spatial distribution of An. darlingi using 39 presence sites. The resulting model provided significantly high prediction performance (a mean 10-fold cross-validated partial area under the curve of 1.11, with an omission error level of 20%, and a continuous Boyce index of 0.42). The model also provided a habitat suitability map and environmental response curves in accordance with the known entomological situation. Several environmental characteristics that had a positive correlation with the presence of An. darlingi were highlighted: nonpermanent anthropogenic changes of the natural environment, the presence of roads and tracks, and opening of the forest. Some geomorphological landforms and high-altitude landscapes appear to be unsuitable for An. darlingi. The species distribution modeling was able to reliably predict the distribution of suitable habitats for An. darlingi in French Guiana. The results complete the knowledge of the spatial distribution of the principal malaria vector in this Amazonian region and identify the main factors that favor its presence. They should contribute to the definition of a targeted vector control strategy in a malaria pre-elimination stage, and allow extrapolation of the acquired knowledge to other Amazonian or malaria-endemic contexts. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.

  7. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    NASA Astrophysics Data System (ADS)

    Fowlie, Andrew

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  8. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.

  9. Convex foundations for generalized MaxEnt models

    NASA Astrophysics Data System (ADS)

    Frongillo, Rafael; Reid, Mark D.

    2014-12-01

    We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences correspond to GEFs via two key observations: 1) Like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o∼p_θ}[φ(o)] ∈ ∂C(θ); 2) Generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_θ') = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.

  10. Human vision is determined based on information theory.

    PubMed

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-03

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  11. Human vision is determined based on information theory

    NASA Astrophysics Data System (ADS)

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  12. Natural convection of a two-dimensional Boussinesq fluid does not maximize entropy production.

    PubMed

    Bartlett, Stuart; Bullock, Seth

    2014-08-01

    Rayleigh-Bénard convection is a canonical example of spontaneous pattern formation in a nonequilibrium system. It has been the subject of considerable theoretical and experimental study, primarily for systems with constant (temperature or heat flux) boundary conditions. In this investigation, we have explored the behavior of a convecting fluid system with negative feedback boundary conditions. At the upper and lower system boundaries, the inward heat flux is defined such that it is a decreasing function of the boundary temperature. Thus the system's heat transport is not constrained in the same manner that it is in the constant temperature or constant flux cases. It has been suggested that the entropy production rate (which has a characteristic peak at intermediate heat flux values) might apply as a selection rule for such a system. In this work, we demonstrate with Lattice Boltzmann simulations that entropy production maximization does not dictate the steady state of this system, despite its success in other, somewhat similar scenarios. Instead, we will show that the same scaling law of dimensionless variables found for constant boundary conditions also applies to this system.

  13. Human vision is determined based on information theory

    PubMed Central

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-01-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236

  14. Representing and computing regular languages on massively parallel networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.I.; O'Sullivan, J.A.; Boysam, B.

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.

  15. Information gains from cosmic microwave background experiments

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Paranjape, Aseem; Akeret, Joël

    2014-07-01

    To shed light on the fundamental problems posed by dark energy and dark matter, a large number of experiments have been performed and combined to constrain cosmological models. We propose a novel way of quantifying the information gained by updates on the parameter constraints from a series of experiments which can either complement earlier measurements or replace them. For this purpose, we use the Kullback-Leibler divergence or relative entropy from information theory to measure differences in the posterior distributions in model parameter space from a pair of experiments. We apply this formalism to a historical series of cosmic microwave background experiments ranging from Boomerang to WMAP, SPT, and Planck. Considering different combinations of these experiments, we thus estimate the information gain in units of bits and distinguish contributions from the reduction of statistical errors and the "surprise" corresponding to a significant shift of the parameters' central values. For this experiment series, we find individual relative entropy gains ranging from about 1 to 30 bits. In some cases, e.g. when comparing WMAP and Planck results, we find that the gains are dominated by the surprise rather than by improvements in statistical precision. We discuss how this technique provides a useful tool for both quantifying the constraining power of data from cosmological probes and detecting the tensions between experiments.
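
    As a minimal illustration of the information-gain measure described above (not the paper's full pipeline), the relative entropy between two Gaussian approximations of the parameter posteriors can be evaluated in closed form and converted to bits; the two toy constraints below are assumptions chosen so that both error shrinkage and a shift of a central value contribute.

      import numpy as np

      def kl_gaussian_bits(mu1, cov1, mu0, cov0):
          # Relative entropy D(P1 || P0) in bits between Gaussian posteriors
          # P1 = N(mu1, cov1) (newer experiment) and P0 = N(mu0, cov0) (older).
          k = len(mu1)
          inv0 = np.linalg.inv(cov0)
          dmu = mu0 - mu1
          nats = 0.5 * (np.trace(inv0 @ cov1) + dmu @ inv0 @ dmu - k
                        + np.log(np.linalg.det(cov0) / np.linalg.det(cov1)))
          return nats / np.log(2)

      # Two toy 2-parameter constraints: the second shrinks the errors and shifts
      # one central value (a "surprise"-like contribution).
      mu0, cov0 = np.array([0.30, 0.70]), np.diag([0.04, 0.09])
      mu1, cov1 = np.array([0.32, 0.55]), np.diag([0.01, 0.02])
      print(f"information gain: {kl_gaussian_bits(mu1, cov1, mu0, cov0):.2f} bits")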

  16. Predicting the potential distribution of main malaria vectors Anopheles stephensi, An. culicifacies s.l. and An. fluviatilis s.l. in Iran based on maximum entropy model.

    PubMed

    Pakdad, Kamran; Hanafi-Bojd, Ahmad Ali; Vatandoost, Hassan; Sedaghat, Mohammad Mehdi; Raeisi, Ahmad; Moghaddam, Abdolreza Salahi; Foroushani, Abbas Rahimi

    2017-05-01

    Malaria is considered a major public health problem in the southern areas of Iran. The goal of this study was to predict the best ecological niches of three main malaria vectors of Iran: Anopheles stephensi, Anopheles culicifacies s.l. and Anopheles fluviatilis s.l. A databank was created which included all published data about Anopheles species of Iran from 1961 to 2015. The suitable environmental niches for the three above-mentioned Anopheles species were predicted using the maximum entropy model (MaxEnt). AUC (area under the ROC curve) values were 0.943, 0.974 and 0.956 for An. stephensi, An. culicifacies s.l. and An. fluviatilis s.l. respectively, which indicate the high predictive power of the model for species niches. The biggest bioclimatic contributor for An. stephensi and An. fluviatilis s.l. was bio 15 (precipitation seasonality), at 25.5% and 36.1% respectively, followed by bio 1 (annual mean temperature) at 20.8% for An. stephensi; for An. culicifacies s.l. the biggest contributor was bio 4 (temperature seasonality), with a 49.4% contribution. This is the first step in the mapping of the country's malaria vectors, and future climate conditions may change the dispersal maps of Anopheles. Iran is in the elimination phase of malaria, so such spatio-temporal studies are essential and can provide guidance for decision makers on IVM strategies in problematic areas. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2017-08-14

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.

  18. Pseudoscalar portal dark matter and new signatures of vector-like fermions

    DOE PAGES

    Fan, JiJi; Koushiappas, Savvas M.; Landsberg, Greg

    2016-01-19

    Fermionic dark matter interacting with the Standard Model sector through a pseudoscalar portal could evade the direct detection constraints while preserving a WIMP miracle. Here, we study the LHC constraints on pseudoscalar production in simplified models with the pseudoscalar dominantly coupled either to b quarks or to τ leptons, and explore their implications for the GeV excesses in gamma ray observations. We also investigate models with new vector-like fermions that could realize the simplified models of pseudoscalar portal dark matter. Furthermore, these models yield new decay channels and signatures of vector-like fermions, for instance bbb, bττ, and τττ resonances. Some of the signatures have already been strongly constrained by existing LHC searches, and the parameter space fitting the gamma ray excess is further restricted. Conversely, the pure τ-rich final state is only weakly constrained so far due to the small electroweak production rate.

  19. A Novel Bearing Multi-Fault Diagnosis Approach Based on Weighted Permutation Entropy and an Improved SVM Ensemble Classifier.

    PubMed

    Zhou, Shenghan; Qian, Silin; Chang, Wenbing; Xiao, Yiyong; Cheng, Yang

    2018-06-14

    Timely and accurate state detection and fault diagnosis of rolling element bearings are very critical to ensuring the reliability of rotating machinery. This paper proposes a novel method of rolling bearing fault diagnosis based on a combination of ensemble empirical mode decomposition (EEMD), weighted permutation entropy (WPE) and an improved support vector machine (SVM) ensemble classifier. A hybrid voting (HV) strategy that combines SVM-based classifiers and cloud similarity measurement (CSM) was employed to improve the classification accuracy. First, the WPE value of the bearing vibration signal was calculated to detect the fault. Secondly, if a bearing fault occurred, the vibration signal was decomposed into a set of intrinsic mode functions (IMFs) by EEMD. The WPE values of the first several IMFs were calculated to form the fault feature vectors. Then, the SVM ensemble classifier was composed of binary SVM and the HV strategy to identify the bearing multi-fault types. Finally, the proposed model was fully evaluated by experiments and comparative studies. The results demonstrate that the proposed method can effectively detect bearing faults and maintain a high accuracy rate of fault recognition when a small number of training samples are available.
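
    A sketch of the weighted permutation entropy feature named above, following the common definition (ordinal patterns of embedded vectors, each weighted by the variance of its embedding vector, normalized to [0, 1]); the embedding order, delay and test signals are assumptions, and the value would typically be computed on each IMF from EEMD before feeding the SVM ensemble.

      import numpy as np
      from math import factorial
      from itertools import permutations

      def weighted_permutation_entropy(x, order=4, delay=1):
          # Ordinal pattern of each embedding vector, weighted by its variance;
          # the result is normalized by log2(order!) so it lies in [0, 1].
          n = len(x) - (order - 1) * delay
          patterns = {p: i for i, p in enumerate(permutations(range(order)))}
          weights = np.zeros(len(patterns))
          for j in range(n):
              v = x[j:j + order * delay:delay]
              weights[patterns[tuple(np.argsort(v))]] += np.var(v)
          p = weights[weights > 0] / weights.sum()
          return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

      rng = np.random.default_rng(6)
      print(weighted_permutation_entropy(rng.standard_normal(2048)))            # near 1 for noise
      print(weighted_permutation_entropy(np.sin(np.linspace(0, 40 * np.pi, 2048))))  # much lower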

  20. An Alternative to the Gauge Theoretic Setting

    NASA Astrophysics Data System (ADS)

    Schroer, Bert

    2011-10-01

    The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum theoretical access in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict creating pointlike generation with the tightest consistent localization: semiinfinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.

  1. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
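
    The palette design itself (SSQ in an opponent color space) is specific to the paper, but the halftoning step it feeds is standard error diffusion against a fixed palette. A minimal Floyd-Steinberg sketch, with a brute-force nearest-palette search standing in for the paper's lookup tables, could read:

    ```python
    import numpy as np

    def error_diffuse(image, palette):
        """Floyd-Steinberg error diffusion against a fixed palette.
        image:   (H, W, 3) float array in the working color space
        palette: (K, 3) float array of palette entries in the same space
        Returns an (H, W) array of palette indices."""
        img = image.astype(float).copy()
        h, w, _ = img.shape
        out = np.zeros((h, w), dtype=int)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                k = int(np.argmin(((palette - old) ** 2).sum(axis=1)))   # nearest palette entry
                out[y, x] = k
                err = old - palette[k]
                # push the quantization error onto unprocessed neighbours
                if x + 1 < w:               img[y, x + 1]     += err * (7 / 16)
                if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * (3 / 16)
                if y + 1 < h:               img[y + 1, x]     += err * (5 / 16)
                if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * (1 / 16)
        return out
    ```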

  2. Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.

    1982-03-01

    This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.

  3. Covariant open bosonic string field theory on multiple D-branes in the proper-time gauge

    NASA Astrophysics Data System (ADS)

    Lee, Taejin

    2017-12-01

    We construct a covariant open bosonic string field theory on multiple D-branes, which reduces to a non-Abelian Yang-Mills gauge theory in the zero-slope limit. Making use of the first quantized open bosonic string in the proper-time gauge, we convert the string amplitudes given by the Polyakov path integrals on string world sheets into those of the second quantized theory. The world sheet diagrams generated by the constructed open string field theory are planar, in contrast to those of Witten's cubic string field theory. The constructed string field theory is nevertheless equivalent to Witten's cubic string field theory. Having obtained planar diagrams, we may adopt the light-cone string field theory technique to calculate multi-string scattering amplitudes with an arbitrary number of external strings. We examine in detail the three-string vertex diagram and the effective four-string vertex diagrams generated perturbatively by the three-string vertex at tree level. In the zero-slope limit, the string scattering amplitudes are identified precisely as those of non-Abelian Yang-Mills gauge theory if the external states are chosen to be massless vector particles.

  4. Metaplectic-c Quantomorphisms

    NASA Astrophysics Data System (ADS)

    Vaughan, Jennifer

    2015-03-01

    In the classical Kostant-Souriau prequantization procedure, the Poisson algebra of a symplectic manifold (M,ω) is realized as the space of infinitesimal quantomorphisms of the prequantization circle bundle. Robinson and Rawnsley developed an alternative to the Kostant-Souriau quantization process in which the prequantization circle bundle and metaplectic structure for (M,ω) are replaced by a metaplectic-c prequantization. They proved that metaplectic-c quantization can be applied to a larger class of manifolds than the classical recipe. This paper presents a definition for a metaplectic-c quantomorphism, which is a diffeomorphism of metaplectic-c prequantizations that preserves all of their structures. Since the structure of a metaplectic-c prequantization is more complicated than that of a circle bundle, we find that the definition must include an extra condition that does not have an analogue in the Kostant-Souriau case. We then define an infinitesimal quantomorphism to be a vector field whose flow consists of metaplectic-c quantomorphisms, and prove that the space of infinitesimal metaplectic-c quantomorphisms exhibits all of the same properties that are seen for the infinitesimal quantomorphisms of a prequantization circle bundle. In particular, this space is isomorphic to the Poisson algebra C^∞(M).

  5. Classical Field Theory and the Stress-Energy Tensor

    NASA Astrophysics Data System (ADS)

    Swanson, Mark S.

    2015-09-01

    This book is a concise introduction to the key concepts of classical field theory for beginning graduate students and advanced undergraduate students who wish to study the unifying structures and physical insights provided by classical field theory without dealing with the additional complication of quantization. In that regard, there are many important aspects of field theory that can be understood without quantizing the fields. These include the action formulation, Galilean and relativistic invariance, traveling and standing waves, spin angular momentum, gauge invariance, subsidiary conditions, fluctuations, spinor and vector fields, conservation laws and symmetries, and the Higgs mechanism, all of which are often treated briefly in a course on quantum field theory. The variational form of classical mechanics and continuum field theory are both developed in the time-honored graduate level text by Goldstein et al (2001). An introduction to classical field theory from a somewhat different perspective is available in Soper (2008). Basic classical field theory is often treated in books on quantum field theory. Two excellent texts where this is done are Greiner and Reinhardt (1996) and Peskin and Schroeder (1995). Green's function techniques are presented in Arfken et al (2013).

  6. A multistage motion vector processing method for motion-compensated frame interpolation.

    PubMed

    Huang, Ai- Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid choosing identical unreliable ones. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
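
    The constrained vector median filter is only named in the record; a bare-bones vector median over candidate motion vectors, with an optional reliability mask as a crude stand-in for the paper's constraint, might be sketched as:

    ```python
    import numpy as np

    def vector_median(candidates, reliable_mask=None):
        """Vector median: the candidate minimizing the summed L2 distance to all
        candidates. An optional reliability mask restricts which vectors may win,
        as a crude stand-in for the constraint described in the record."""
        v = np.asarray(candidates, dtype=float)                  # shape (N, 2)
        d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1).sum(axis=1)
        if reliable_mask is not None:
            d = np.where(np.asarray(reliable_mask, dtype=bool), d, np.inf)
        return v[int(np.argmin(d))]
    ```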

  7. The Weighted Burgers Vector: a new quantity for constraining dislocation densities and types using electron backscatter diffraction on 2D sections through crystalline materials.

    PubMed

    Wheeler, J; Mariani, E; Piazolo, S; Prior, D J; Trimby, P; Drury, M R

    2009-03-01

    The Weighted Burgers Vector (WBV) is defined here as the sum, over all types of dislocations, of [(density of intersections of dislocation lines with a map) x (Burgers vector)]. Here we show that it can be calculated, for any crystal system, solely from orientation gradients in a map view, unlike the full dislocation density tensor, which requires gradients in the third dimension. No assumption is made about gradients in the third dimension and they may be non-zero. The only assumption involved is that elastic strains are small so the lattice distortion is entirely due to dislocations. Orientation gradients can be estimated from gridded orientation measurements obtained by EBSD mapping, so the WBV can be calculated as a vector field on an EBSD map. The magnitude of the WBV gives a lower bound on the magnitude of the dislocation density tensor when that magnitude is defined in a coordinate invariant way. The direction of the WBV can constrain the types of Burgers vectors of geometrically necessary dislocations present in the microstructure, most clearly when it is broken down in terms of lattice vectors. The WBV has three advantages over other measures of local lattice distortion: it is a vector and hence carries more information than a scalar quantity, it has an explicit mathematical link to the individual Burgers vectors of dislocations and, since it is derived via tensor calculus, it is not dependent on the map coordinate system. If a sub-grain wall is included in the WBV calculation, the magnitude of the WBV becomes dependent on the step size but its direction still carries information on the Burgers vectors in the wall. The net Burgers vector content of dislocations intersecting an area of a map can be simply calculated by an integration round the edge of that area, a method which is fast and complements point-by-point WBV calculations.
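
    Written out, the definition quoted above amounts to the following, with ρ_t the areal density of intersections of type-t dislocations with the map plane and b_t the corresponding Burgers vector (a schematic restatement, not the paper's notation):

    ```latex
    % W   : Weighted Burgers Vector (sum over dislocation types t)
    % B(A): net Burgers vector content of dislocations crossing an area A,
    %       which the paper evaluates as a line integral around the edge of A.
    \[
      \mathbf{W} \;=\; \sum_{t} \rho_{t}\,\mathbf{b}_{t},
      \qquad
      \mathbf{B}(A) \;=\; \int_{A} \mathbf{W}\,\mathrm{d}A .
    \]
    ```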

  8. Criteria for equality in two entropic inequalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirokov, M. E., E-mail: msh@mi.ras.ru

    2014-07-31

    We obtain a simple criterion for local equality between the constrained Holevo capacity and the quantum mutual information of a quantum channel. This shows that the set of all states for which this equality holds is determined by the kernel of the channel (as a linear map). Applications to Bosonic Gaussian channels are considered. It is shown that for a Gaussian channel having no completely depolarizing components the above characteristics may coincide only at non-Gaussian mixed states, and a criterion for the existence of such states is given. All the obtained results may be reformulated as conditions for equality between the constrained Holevo capacity of a quantum channel and the input von Neumann entropy. Bibliography: 20 titles.

  9. Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals.

    PubMed

    Hu, Jianfeng

    2017-01-01

    Purpose: Driving fatigue has become one of the important causes of road accidents, and much research has been devoted to analyzing driver fatigue. EEG is becoming increasingly useful in measuring the fatigue state. Manual interpretation of EEG signals is impossible, so an effective method for automatic detection of EEG signals is crucially needed. Method: In order to evaluate the complex, unstable, and non-linear characteristics of EEG signals, four feature sets were computed from EEG signals, in which fuzzy entropy (FE), sample entropy (SE), approximate entropy (AE), spectral entropy (PE), and combined entropies (FE + SE + AE + PE) were included. All these feature sets were used as the input vectors of an AdaBoost classifier, a boosting method which is fast and highly accurate. To assess our method, several experiments including parameter setting and classifier comparison were conducted on 28 subjects. For comparison, Decision Trees (DT), Support Vector Machine (SVM) and Naive Bayes (NB) classifiers were used. Results: The proposed method (combination of FE and AdaBoost) yields superior performance over the other schemes. Using the FE feature extractor, AdaBoost achieves an improved area under the receiver operating curve (AUC) of 0.994, error rate (ERR) of 0.024, Precision of 0.969, Recall of 0.984, F1 score of 0.976, and Matthews correlation coefficient (MCC) of 0.952, compared to SVM (ERR of 0.035, Precision of 0.957, Recall of 0.974, F1 score of 0.966, and MCC of 0.930 with AUC of 0.990), DT (ERR of 0.142, Precision of 0.857, Recall of 0.859, F1 score of 0.966, and MCC of 0.716 with AUC of 0.916) and NB (ERR of 0.405, Precision of 0.646, Recall of 0.434, F1 score of 0.519, and MCC of 0.203 with AUC of 0.606). This shows that the FE feature set and the combined feature set outperform the other feature sets. AdaBoost also appears more robust against changes in the ratio of test samples to all samples and in the number of subjects, which might therefore aid the real-time detection of driver fatigue through the classification of EEG signals. Conclusion: By using the combination of FE features and an AdaBoost classifier to detect EEG-based driver fatigue, this paper builds confidence in exploring the inherent physiological mechanisms and wearable applications.
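
    As a rough illustration of the pipeline described in the Method section, the sketch below feeds a per-epoch entropy feature into scikit-learn's AdaBoostClassifier; the data, labels, and the single spectral-entropy extractor are placeholders, and a faithful reproduction would concatenate FE, SE, AE, and PE features per channel as in the paper:

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    def spectral_entropy(x):
        """Normalized spectral entropy of one EEG epoch (one of the four feature types)."""
        psd = np.abs(np.fft.rfft(x)) ** 2
        p = psd / psd.sum()
        return float(-(p * np.log(p + 1e-12)).sum() / np.log(len(p)))

    # Placeholder data: 56 single-channel epochs of 1000 samples with toy labels
    # (0 = alert, 1 = fatigued). Real use would extract FE, SE, AE and PE per channel.
    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(56, 1000))
    labels = np.repeat([0, 1], 28)

    X = np.array([[spectral_entropy(e)] for e in epochs])    # one feature per epoch
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, labels, cv=5).mean())      # ~chance level on random data
    ```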

  10. Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals

    PubMed Central

    Hu, Jianfeng

    2017-01-01

    Purpose: Driving fatigue has become one of the important causes of road accidents, and much research has been devoted to analyzing driver fatigue. EEG is becoming increasingly useful in measuring the fatigue state. Manual interpretation of EEG signals is impossible, so an effective method for automatic detection of EEG signals is crucially needed. Method: In order to evaluate the complex, unstable, and non-linear characteristics of EEG signals, four feature sets were computed from EEG signals, in which fuzzy entropy (FE), sample entropy (SE), approximate entropy (AE), spectral entropy (PE), and combined entropies (FE + SE + AE + PE) were included. All these feature sets were used as the input vectors of an AdaBoost classifier, a boosting method which is fast and highly accurate. To assess our method, several experiments including parameter setting and classifier comparison were conducted on 28 subjects. For comparison, Decision Trees (DT), Support Vector Machine (SVM) and Naive Bayes (NB) classifiers were used. Results: The proposed method (combination of FE and AdaBoost) yields superior performance over the other schemes. Using the FE feature extractor, AdaBoost achieves an improved area under the receiver operating curve (AUC) of 0.994, error rate (ERR) of 0.024, Precision of 0.969, Recall of 0.984, F1 score of 0.976, and Matthews correlation coefficient (MCC) of 0.952, compared to SVM (ERR of 0.035, Precision of 0.957, Recall of 0.974, F1 score of 0.966, and MCC of 0.930 with AUC of 0.990), DT (ERR of 0.142, Precision of 0.857, Recall of 0.859, F1 score of 0.966, and MCC of 0.716 with AUC of 0.916) and NB (ERR of 0.405, Precision of 0.646, Recall of 0.434, F1 score of 0.519, and MCC of 0.203 with AUC of 0.606). This shows that the FE feature set and the combined feature set outperform the other feature sets. AdaBoost also appears more robust against changes in the ratio of test samples to all samples and in the number of subjects, which might therefore aid the real-time detection of driver fatigue through the classification of EEG signals. Conclusion: By using the combination of FE features and an AdaBoost classifier to detect EEG-based driver fatigue, this paper builds confidence in exploring the inherent physiological mechanisms and wearable applications. PMID:28824409

  11. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise would result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATLAB™ (MATrix LABoratory) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
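
    The record describes ranking candidate ring vectors by an ROC area-under-the-curve figure of merit. A small sketch of that ranking step, with score_fn as a hypothetical hook returning a candidate vector's match scores over a marked region, might be:

    ```python
    import numpy as np

    def auc(pos_scores, neg_scores):
        """Area under the ROC curve via the Mann-Whitney U statistic."""
        pos = np.asarray(pos_scores, dtype=float)
        neg = np.asarray(neg_scores, dtype=float)
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    def best_vector(candidates, score_fn, pos_region, neg_region):
        """Rank candidate ring vectors by AUC; score_fn(vector, region) is a
        hypothetical hook returning the vector's match scores over that region."""
        aucs = [auc(score_fn(v, pos_region), score_fn(v, neg_region)) for v in candidates]
        k = int(np.argmax(aucs))
        return candidates[k], aucs[k]
    ```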

  12. Analysis of the horizontal structure of a measurement and control geodetic network based on entropy

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2013-06-01

    The paper attempts to determine an optimum structure of a directional measurement and control network intended for investigating horizontal displacements. For this purpose it uses the notion of entropy as a logarithmic measure of the probability of the state of a particular observation system. The optimum number of observations follows from the entropy increment of the parameter vector, ΔH_X̂(x), corresponding to one extra observation. This increment of entropy, interpreted as an increment in the amount of information about the state of the system, determines whether another extra observation is adopted or rejected.

  13. Immirzi parameter and Noether charges in first order gravity

    NASA Astrophysics Data System (ADS)

    Durka, Remigiusz

    2012-02-01

    The framework of SO(3,2) constrained BF theory applied to gravity makes it possible to generalize formulas for gravitational diffeomorphic Noether charges (mass, angular momentum, and entropy). It extends Wald's approach to the case of first order gravity with a negative cosmological constant, the Holst modification and the topological terms (Nieh-Yan, Euler, and Pontryagin). Topological invariants play an essential role, contributing to the boundary terms in the regularization scheme for asymptotically AdS spacetimes, so that the differentiability of the action is automatically secured. Intriguingly, it turns out that the black hole thermodynamics does not depend on the Immirzi parameter for the AdS-Schwarzschild, AdS-Kerr, and topological black holes, whereas a nontrivial modification appears for the AdS-Taub-NUT spacetime, where it impacts not only the entropy, but also the total mass.

  14. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  15. Mathematics of Quantization and Quantum Fields

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Gérard, Christian

    2013-03-01

    Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.

  16. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods work to minimize perceptual distortion in a variety of color spaces.

  17. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  18. Theoretical nuclear physics

    NASA Astrophysics Data System (ADS)

    Rost, E.; Shephard, J. R.

    1992-08-01

    This report discusses the following topics: Exact 1-loop vacuum polarization effects in 1 + 1 dimensional QHD; exact 1-fermion loop contributions in 1 + 1 dimensional solitons; exact scalar 1-loop contributions in 1 + 3 dimensions; exact vacuum calculations in a hyper-spherical basis; relativistic nuclear matter with self-consistent correlation energy; consistent RHA-RPA for finite nuclei; transverse response functions in the Δ-resonance region; hadronic matter in a nontopological soliton model; scalar and vector contributions to the p̄p → Λ̄Λ reaction; 0+ and 2+ strengths in pion double-charge exchange to double giant-dipole resonances; and nucleons in a hybrid sigma model including a quantized pion field.

  19. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  20. Atomistic Free Energy Model for Nucleic Acids: Simulations of Single-Stranded DNA and the Entropy Landscape of RNA Stem-Loop Structures.

    PubMed

    Mak, Chi H

    2015-11-25

    While single-stranded (ss) segments of DNAs and RNAs are ubiquitous in biology, details about their structures have only recently begun to emerge. To study ssDNA and RNAs, we have developed a new Monte Carlo (MC) simulation using a free energy model for nucleic acids that has the atomistic accuracy to capture fine molecular details of the sugar-phosphate backbone. Formulated on the basis of a first-principle calculation of the conformational entropy of the nucleic acid chain, this free energy model correctly reproduced both the long and short length-scale structural properties of ssDNA and RNAs in a rigorous comparison against recent data from fluorescence resonance energy transfer, small-angle X-ray scattering, force spectroscopy and fluorescence correlation transport measurements on sequences up to ∼100 nucleotides long. With this new MC algorithm, we conducted a comprehensive investigation of the entropy landscape of small RNA stem-loop structures. From a simulated ensemble of ∼10^6 equilibrium conformations, the entropy for the initiation of different size RNA hairpin loops was computed and compared against thermodynamic measurements. Starting from seeded hairpin loops, constrained MC simulations were then used to estimate the entropic costs associated with propagation of the stem. The numerical results provide new direct molecular insights into thermodynamic measurements from macroscopic calorimetry and melting experiments.

  1. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes.

    PubMed

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-10-03

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data to a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors: one composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease gene enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
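
    The record does not give the exact construction of the 45-element entropy sub-vector; one plausible reading, used purely for illustration, is the per-substructure contribution -p_i log2 p_i of a disease's gene distribution over the 45 chromosome substructures:

    ```python
    import numpy as np

    def substructure_entropy_vector(gene_counts):
        """Hypothetical construction of the 45-element sub-vector: per-substructure
        contributions -p_i*log2(p_i) of a disease's gene distribution over the 45
        chromosome substructures (gene_counts: length-45 array of gene counts)."""
        c = np.asarray(gene_counts, dtype=float)
        total = c.sum()
        p = c / total if total > 0 else c
        out = np.zeros_like(p)
        nz = p > 0
        out[nz] = -p[nz] * np.log2(p[nz])
        return out
    ```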

  2. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes

    PubMed Central

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-01-01

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data to a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors: one composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease gene enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes. PMID:29108274

  3. Evaluation of the impacts of climate change on disease vectors through ecological niche modelling.

    PubMed

    Carvalho, B M; Rangel, E F; Vale, M M

    2017-08-01

    Vector-borne diseases are exceptionally sensitive to climate change. Predicting vector occurrence in specific regions is a challenge that disease control programs must meet in order to plan and execute control interventions and climate change adaptation measures. Recently, an increasing number of scientific articles have applied ecological niche modelling (ENM) to study medically important insects and ticks. With a myriad of available methods, it is challenging to interpret their results. Here we review the future projections of disease vectors produced by ENM, and assess their trends and limitations. Tropical regions are currently occupied by many vector species; but future projections indicate poleward expansions of suitable climates for their occurrence and, therefore, entomological surveillance must be continuously done in areas projected to become suitable. The most commonly applied methods were the maximum entropy algorithm, generalized linear models, the genetic algorithm for rule set prediction, and discriminant analysis. Lack of consideration of the full-known current distribution of the target species on models with future projections has led to questionable predictions. We conclude that there is no ideal 'gold standard' method to model vector distributions; researchers are encouraged to test different methods for the same data. Such practice is becoming common in the field of ENM, but still lags behind in studies of disease vectors.

  4. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  5. Texting Styles and Information Change of SMS Text Messages in Filipino

    NASA Astrophysics Data System (ADS)

    Cabatbat, Josephine Jill T.; Tapang, Giovanni A.

    2013-02-01

    We identify the different styles of texting in Filipino short message service (SMS) texts and analyze the change in unigram and bigram frequencies due to these styles. Style preference vectors for sample texts were calculated and used to identify the style combination used by an average individual. The change in Shannon entropy of the SMS text is explained in light of a coding process.
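
    Unigram and bigram Shannon entropies of a text are straightforward to compute; a minimal sketch (the toy string is not from the paper) is:

    ```python
    from collections import Counter
    from math import log2

    def shannon_entropy(tokens):
        """Shannon entropy (bits per symbol) of a token stream."""
        counts = Counter(tokens)
        n = sum(counts.values())
        return -sum(c / n * log2(c / n) for c in counts.values())

    text = "d2 n tc u"                                  # toy SMS-style string, not from the paper
    unigrams = list(text)
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    print(shannon_entropy(unigrams), shannon_entropy(bigrams))
    ```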

  6. Global velocity constrained cloud motion prediction for short-term solar forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping

    2016-09-01

    Cloud motion is the primary reason for short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the widely used Particle Image Velocimetry (PIV) algorithm, which assumes the homogeneity of motion vectors, the proposed method can capture the accurate motion vector for each cloud block, including both the motional tendency and morphological changes. Specifically, the global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation is achieved by global-velocity-based cloud block searching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially in a short prediction horizon.
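
    A bare-bones version of block matching restricted to a small window around a global velocity estimate, which is the gist of the constraint described above, might look like the following (window radius and block size are illustrative choices, not values from the paper):

    ```python
    import numpy as np

    def constrained_block_match(prev, cur, top, left, size, global_v, radius=4):
        """Search the previous frame for the best match of a current-frame block,
        restricting the search to a small window centred on the global velocity
        estimate (e.g. the PIV result); returns the per-block motion vector (dy, dx)."""
        block = cur[top:top + size, left:left + size].astype(float)
        gy, gx = int(round(global_v[0])), int(round(global_v[1]))
        best, best_err = (gy, gx), np.inf
        for dy in range(gy - radius, gy + radius + 1):
            for dx in range(gx - radius, gx + radius + 1):
                y, x = top + dy, left + dx
                if 0 <= y and y + size <= prev.shape[0] and 0 <= x and x + size <= prev.shape[1]:
                    err = np.abs(prev[y:y + size, x:x + size].astype(float) - block).sum()
                    if err < best_err:
                        best, best_err = (dy, dx), err
        return best
    ```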

  7. Stability of various entanglements in the interaction between two two-level atoms with a quantized field under the influences of several decay sources

    NASA Astrophysics Data System (ADS)

    Valizadeh, Sh.; Tavassoly, M. K.; Yazdanpanah, N.

    2018-02-01

    In this paper the interaction between two two-level atoms and a single-mode quantized field is studied. To achieve exact information about the physical properties of the system, one should take into account various sources of dissipation such as photon leakage of the cavity, the spontaneous emission rate of the atoms, internal thermal radiation of the cavity, and the dipole-dipole interaction between the two atoms. In order to achieve the desired goals, we obtain the time evolution of the associated density operator by solving the time-dependent Lindblad equation corresponding to the system. Then, we evaluate the temporal behavior of the total population inversion and the quantum entanglement between the evolved subsystems, numerically. We clearly show how the damping parameters affect the dynamics of the considered properties. By analyzing the numerical results, we observe that increasing each of the damping sources leads to faster decay of the total population inversion. Also, it is observed that, after starting the interaction, the entanglement between one atom and the other parts of the system, as well as the entanglement between the "atom-atom" subsystem and the "field", tend to some constant values very soon. Moreover, the stable values of entanglement are reduced by increasing the damping factor Γ_A (Γ_A^(1) = Γ_A^(2) = Γ_A), where Γ_A is the spontaneous emission rate of each atom. In addition, we find that by increasing the thermal photons, the entropies (entanglements) tend sooner to some increased stable values. Accordingly, we study the atom-atom entanglement by evaluating the concurrence under the influence of dissipation sources, too. At last, the effects of dissipation sources on the genuine tripartite entanglement between the three subsystems, consisting of the two two-level atoms and the quantized field, are numerically studied. Due to the important role of stationary entanglement in quantum information processing, our results may provide useful hints for practical protocols which require appropriate mechanisms to prevent, or at least minimize, the influence of the decoherence phenomenon.

  8. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  9. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

    This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is given based on loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.

  10. Path integral solution for a Klein-Gordon particle in vector and scalar deformed radial Rosen-Morse-type potentials

    NASA Astrophysics Data System (ADS)

    Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.

    2017-12-01

    The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q ≤ -1, with the radial variable restricted to r > (1/2α) ln|q|, it is shown that the quantization conditions for the bound-state energy levels E_{n_r} are transcendental equations which can be solved numerically. Three special cases, namely the standard radial Manning-Rosen potential (|q| = 1), the standard radial Rosen-Morse potential (V_2 → -V_2, q = 1) and the radial Eckart potential (V_1 → -V_1, q = 1), are also briefly discussed.

  11. Detection of laryngeal function using speech and electroglottographic data.

    PubMed

    Childers, D G; Bae, K S

    1992-01-01

    The purpose of this research was to develop quantitative measures for the assessment of laryngeal function using speech and electroglottographic (EGG) data. We developed two procedures for the detection of laryngeal pathology: 1) a spectral distortion measure using pitch synchronous and asynchronous methods with linear predictive coding (LPC) vectors and vector quantization (VQ), and 2) analysis of the EGG signal using time interval and amplitude difference measures. The VQ procedure was conjectured to offer the possibility of circumventing the need to estimate the glottal volume velocity waveform by inverse filtering techniques. The EGG procedure was to evaluate data that was "nearly" a direct measure of vocal fold vibratory motion and thus was conjectured to offer the potential for providing an excellent assessment of laryngeal function. A threshold-based procedure gave 75.9 and 69.0% probability of pathological detection using procedures 1) and 2), respectively, for 29 patients with pathological voices and 52 normal subjects. The false alarm probability was 9.6% for the normal subjects.

  12. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
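
    The percentile-based anomaly test over quantization errors can be sketched in a few lines; the prototypes here would come from whichever competitive algorithm (WTA, FSCL, SOM, or NGA) was trained on normal state vectors, and the percentile bounds are illustrative:

    ```python
    import numpy as np

    def quantization_error(x, prototypes):
        """Distance from a state vector to its best-matching prototype."""
        x = np.asarray(x, dtype=float)
        return float(np.min(np.linalg.norm(np.asarray(prototypes, dtype=float) - x, axis=1)))

    def build_normality_profile(train_errors, lower=2.5, upper=97.5):
        """Percentile-based interval over quantization errors of normal training vectors."""
        return np.percentile(train_errors, [lower, upper])

    def is_abnormal(error, profile):
        lo, hi = profile
        return bool(error < lo or error > hi)
    ```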

  13. Meson effective mass in the isospin medium in hard-wall AdS/QCD model

    NASA Astrophysics Data System (ADS)

    Mamedov, Shahin

    2016-02-01

    We study the mass splitting of the light vector, axial-vector, and pseudoscalar mesons in the isospin medium in the framework of the hard-wall model. We write an effective mass definition for the interacting gauge fields and scalar field introduced in gauge field theory in the bulk of AdS space-time. Relying on holographic duality, we obtain a formula for the effective mass of a boundary meson in terms of a derivative operator over the extra bulk coordinate. The effective mass found in this way coincides with the one obtained from finding the poles of the two-point correlation function. In order to avoid introducing distinguished infrared boundaries in the quantization formula for the different mesons from the same isotriplet, we introduce extra action terms at this boundary, which reduce the distinguished values of this boundary to a common value. Profile function solutions and effective mass expressions were found for the in-medium ρ, a_1, and π mesons.

  14. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, the quantization is responsible for quality losses. On the other hand, it allows the reduction of bit rates to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. Specifically, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in terms of compression efficiency are achievable for Intra coding.
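
    The QP selection logic reduces to minimizing the Lagrangian cost J = D + λR over the candidate quantization parameters. A schematic version, with encode_fn as a hypothetical hook returning the distortion and rate of the current block for a given QP, is:

    ```python
    def select_qp(candidate_qps, encode_fn, lam):
        """Pick the QP minimizing the Lagrangian cost J = D + lam * R.
        encode_fn(qp) is a hypothetical hook returning (distortion, rate) for the
        current block encoded with that quantization parameter."""
        best_qp, best_cost = None, float("inf")
        for qp in candidate_qps:
            distortion, rate = encode_fn(qp)
            cost = distortion + lam * rate
            if cost < best_cost:
                best_qp, best_cost = qp, cost
        return best_qp
    ```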

  15. Hamiltonian thermodynamics of charged three-dimensional dilatonic black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Goncalo A. S.; Lemos, Jose P. S.; Centro Multidisciplinar de Astrofisica-CENTRA, Departamento de Fisica, Instituto Superior Tecnico-IST, Universidade Tecnica de Lisboa-UTL, Avenida Rovisco Pais 1, 1049-001 Lisboa

    2008-10-15

    The action for a class of three-dimensional dilaton-gravity theories, with an electromagnetic Maxwell field and a cosmological constant, can be recast in a Brans-Dicke-Maxwell type action, with its free ω parameter. For a negative cosmological constant, these theories have static, electrically charged, and spherically symmetric black hole solutions. Those theories with well-formulated asymptotics are studied through a Hamiltonian formalism, and their thermodynamical properties are determined. The theories studied are general relativity (ω → ±∞), a dimensionally reduced cylindrical four-dimensional general relativity theory (ω = 0), and a theory representing a class of theories (ω = -3), all with a Maxwell term. The Hamiltonian formalism is set up in three dimensions through foliations on the right region of the Carter-Penrose diagram, with the bifurcation 1-sphere as the left boundary, and anti-de Sitter infinity as the right boundary. The metric functions on the foliated hypersurfaces and the radial component of the vector potential one-form are the canonical coordinates. The Hamiltonian action is written, the Hamiltonian being a sum of constraints. One finds a new action which yields an unconstrained theory with two pairs of canonical coordinates (M, P_M; Q, P_Q), where M is the mass parameter, which for ω < -3/2 and for ω = ±∞ needs a careful renormalization, P_M is the conjugate momentum of M, Q is the charge parameter, and P_Q is its conjugate momentum. The resulting Hamiltonian is a sum of boundary terms only. A quantization of the theory is performed. The Schroedinger evolution operator is constructed, the trace is taken, and the partition function of the grand canonical ensemble is obtained, where the chemical potential is the scalar electric field φ. Like the uncharged cases studied previously, the charged black hole entropies differ, in general, from the usual quarter of the horizon area due to the dilaton.

  16. Permutation Entropy and Signal Energy Increase the Accuracy of Neuropathic Change Detection in Needle EMG

    PubMed Central

    2018-01-01

    Background and Objective. Needle electromyography can be used to detect changes in the number and morphology of motor unit potentials in patients with axonal neuropathy. General mathematical methods of pattern recognition and signal analysis were applied to recognize neuropathic changes. This study validates the possibility of extending and refining turns-amplitude analysis using permutation entropy and signal energy. Methods. In this study, we examined needle electromyography in 40 neuropathic individuals and 40 controls. The number of turns, amplitude between turns, signal energy, and permutation entropy were used as features for support vector machine classification. Results. The obtained results proved the superior classification performance of the combinations of all of the above-mentioned features compared to combinations of fewer features. Of the tested feature combinations, peak-ratio analysis had the lowest accuracy. Conclusion. Using the combination of permutation entropy with signal energy, number of turns, and mean amplitude in SVM classification can be used to refine the diagnosis of polyneuropathies examined by needle electromyography. PMID:29606959
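
    A rough sketch of the turns-amplitude and energy features named above is given below; the turn threshold is a common EMG convention assumed here, not a value quoted in the record, and a permutation entropy routine would be computed separately:

    ```python
    import numpy as np

    def turns_amplitude_energy(x, turn_threshold=100.0):
        """Number of turns, mean amplitude between consecutive turns, and signal energy.
        A 'turn' is taken as a local extremum differing from the previous accepted turn
        by at least turn_threshold (100 uV is a common EMG convention, assumed here)."""
        x = np.asarray(x, dtype=float)
        turns = [x[0]]                                  # seed reference, not counted as a turn
        for i in range(1, len(x) - 1):
            is_extremum = (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0
            if is_extremum and abs(x[i] - turns[-1]) >= turn_threshold:
                turns.append(x[i])
        amp = np.abs(np.diff(turns)).mean() if len(turns) > 1 else 0.0
        return len(turns) - 1, float(amp), float(np.sum(x ** 2))
    ```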

  17. Iterative variational mode decomposition based automated detection of glaucoma using fundus images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Kanhangad, Vivek; Bhandary, Sulatha V; Acharya, U Rajendra

    2017-09-01

    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for the automated diagnosis of glaucoma using digital fundus images. The variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features, namely Kapur entropy, Renyi entropy, Yager entropy, and fractal dimensions, are extracted from the VMD components. The ReliefF algorithm is used to select the discriminatory features, and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid ophthalmologists in confirming their manual reading of classes (glaucoma or normal) using fundus images.

  18. Generalized composite multiscale permutation entropy and Laplacian score based rolling bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng

    2018-01-01

    Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting the nonlinear dynamic change of time series, and it can be used effectively to extract the nonlinear dynamic fault feature from vibration signals of rolling bearings. To address the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of parameters on GCMPE and its comparison with MPE are also studied by analyzing simulated data. GCMPE was applied to fault feature extraction from rolling bearing vibration signals; then, based on GCMPE, Laplacian score feature selection, and a particle swarm optimization based support vector machine, a new fault diagnosis method for rolling bearings was put forward in this paper. Finally, the proposed method was applied to analyze experimental data of rolling bearings. The analysis results show that the proposed method can effectively realize the fault diagnosis of rolling bearings and has a higher fault recognition rate than the existing methods.
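
    The composite part of the coarse-graining can be sketched as follows; pe_fn stands for any permutation entropy implementation, and the plain mean is used for coarse-graining here, whereas the paper's generalized variant modifies that statistic:

    ```python
    import numpy as np

    def composite_multiscale_pe(x, scale, pe_fn, m=3, tau=1):
        """Composite coarse-graining: average pe_fn over all `scale` time-shifted
        coarse-grained series. pe_fn is assumed to be a permutation entropy routine
        taking (series, m, tau); plain averaging is used for coarse-graining here."""
        x = np.asarray(x, dtype=float)
        values = []
        for offset in range(scale):
            n = (len(x) - offset) // scale
            if n == 0:
                continue
            cg = x[offset:offset + n * scale].reshape(n, scale).mean(axis=1)
            values.append(pe_fn(cg, m, tau))
        return float(np.mean(values))
    ```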

  19. Protein Conformational Entropy is Independent of Solvent

    NASA Astrophysics Data System (ADS)

    Nucci, Nathaniel; Moorman, Veronica; Gledhill, John; Valentine, Kathleen; Wand, A. Joshua

    Proteins exhibit most of their conformational entropy in individual bond vector motions on the ps-ns timescale. These motions can be examined through determination of the Lipari-Szabo generalized squared order parameter (O²) using NMR spin relaxation measurements. It is often argued that most protein motions are intimately dependent on the nature of the solvating environment. Here the solvent dependence of the fast protein dynamics is directly assessed. Using the model protein ubiquitin, the order parameters of the backbone and methyl groups are shown to be generally unaffected by up to a six-fold increase in bulk viscosity or by encapsulation in the nanoscale interior of a reverse micelle. In addition, the reverse micelle condition permits direct comparison of protein dynamics to the mobility of the hydration layer; no correlation is observed. The dynamics of aromatic side chains are also assessed and provide an estimate of the length- and timescale of protein motions where solvent dependence is seen. These data demonstrate the solvent independence of conformational entropy, clarifying a long-held misconception in the fundamental behavior of biological macromolecules. Supported by the National Science Foundation.

  20. Classification enhancement for post-stroke dementia using fuzzy neighborhood preserving analysis with QR-decomposition.

    PubMed

    Al-Qazzaz, Noor Kamal; Ali, Sawal; Ahmad, Siti Anom; Escudero, Javier

    2017-07-01

    The aim of the present study was to discriminate the electroencephalogram (EEG) of 5 patients with vascular dementia (VaD), 15 patients with stroke-related mild cognitive impairment (MCI), and 15 control normal subjects during a working memory (WM) task. We used independent component analysis (ICA) and wavelet transform (WT) as a hybrid preprocessing approach for EEG artifact removal. Three different features were extracted from the cleaned EEG signals: spectral entropy (SpecEn), permutation entropy (PerEn) and Tsallis entropy (TsEn). Two classification schemes were applied - support vector machine (SVM) and k-nearest neighbors (kNN) - with fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR) as a dimensionality reduction technique. The FNPAQR dimensionality reduction technique increased the SVM classification accuracy from 82.22% to 90.37% and from 82.6% to 86.67% for kNN. These results suggest that FNPAQR consistently improves the discrimination of VaD, MCI patients and control normal subjects and it could be a useful feature selection to help the identification of patients with VaD and MCI.

  1. Modified Bose-Einstein and Fermi-Dirac statistics if excitations are localized on an intermediate length scale: applications to non-Debye specific heat.

    PubMed

    Chamberlin, Ralph V; Davis, Bryce F

    2013-10-01

    Disordered systems show deviations from the standard Debye theory of specific heat at low temperatures. These deviations are often attributed to two-level systems of uncertain origin. We find that a source of excess specific heat comes from correlations between quanta of energy if excitations are localized on an intermediate length scale. We use simulations of a simplified Creutz model for a system of Ising-like spins coupled to a thermal bath of Einstein-like oscillators. One feature of this model is that energy is quantized in both the system and its bath, ensuring conservation of energy at every step. Another feature is that the exact entropies of both the system and its bath are known at every step, so that their temperatures can be determined independently. We find that there is a mismatch in canonical temperature between the system and its bath. In addition to the usual finite-size effects in the Bose-Einstein and Fermi-Dirac distributions, if excitations in the heat bath are localized on an intermediate length scale, this mismatch is independent of system size up to at least 10^6 particles. We use a model for correlations between quanta of energy to adjust the statistical distributions and yield a thermodynamically consistent temperature. The model includes a chemical potential for units of energy, as is often used for other types of particles that are quantized and conserved. Experimental evidence for this model comes from its ability to characterize the excess specific heat of imperfect crystals at low temperatures.

  2. Full Spectrum Conversion Using Traveling Pulse Wave Quantization

    DTIC Science & Technology

    2017-03-01

    Full Spectrum Conversion Using Traveling Pulse Wave Quantization Michael S. Kappes Mikko E. Waltari IQ-Analog Corporation San Diego, California...temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete...pulse width measurements that are continuously generated hence the name “traveling” pulse wave quantization. Our TPWQ-based ADC is composed of a

  3. Digital TV processing system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard transformed components of this last vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-encoded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.

  4. The effects of vector leptoquark on the ℬb(ℬ = Λ,Σ) →ℬμ+μ- decays

    NASA Astrophysics Data System (ADS)

    Wang, Shuai-Wei; Huang, Jin-Shu

    2016-07-01

    In this paper, we have studied the baryonic semileptonic ℬb(ℬ = Λ, Σ) →ℬμ+μ- decays in the vector leptoquark model with the U = (3, 3, 2/3) state. Using the parameter space constrained by some well-measured decay modes, such as Bs → μ+μ-, Bs -B¯s mixing and B → K∗μ+μ- decays, we show the effects of the vector leptoquark state on the double lepton polarization asymmetries of ℬb(ℬ = Λ, Σ) →ℬμ+μ- decays, and find that the double lepton polarization asymmetries, except for PLL, PLN and PNL, are sensitive to the contributions of the vector leptoquark model.

  5. Constraining the equation of state with identified particle spectra

    NASA Astrophysics Data System (ADS)

    Monnai, Akihiko; Ollitrault, Jean-Yves

    2017-10-01

    We show that in a central nucleus-nucleus collision, the variation of the mean transverse mass with the multiplicity is determined, up to a rescaling, by the variation of the energy over entropy ratio as a function of the entropy density, thus providing a direct link between experimental data and the equation of state. Each colliding energy thus probes the equation of state at an effective entropy density, whose approximate value is 19 fm⁻³ for Au+Au collisions at 200 GeV and 41 fm⁻³ for Pb+Pb collisions at 2.76 TeV, corresponding to temperatures of 227 and 279 MeV if the equation of state is taken from lattice calculations. The relative change of the mean transverse mass as a function of the colliding energy gives a direct measure of the pressure over energy density ratio P/ɛ at the corresponding effective density. Using Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) data, we obtain P/ɛ = 0.21 ± 0.10, in agreement with the lattice value P/ɛ = 0.23 in the corresponding temperature range. Measurements over a wide range of colliding energies using a single detector with good particle identification would help reduce the error.

  6. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.

  7. Conformational Entropy from NMR Relaxation in Proteins: The SRLS Perspective.

    PubMed

    Tchaicheeyan, Oren; Meirovitch, Eva

    2017-02-02

    Conformational entropy changes associated with bond-vector motions in proteins contribute to the free energy of ligand-binding. To derive such contributions, we apply the slowly relaxing local structure (SRLS) approach to NMR relaxation from 15N-H bonds or C-CDH2 moieties of several proteins in free and ligand-bound form. The spatial restraints on probe motion, which determine the extent of local order, are expressed in SRLS by a well-defined potential, u(θ). The latter yields the orientational probability density, P_eq = exp(-u(θ)), and hence the related conformational entropy, Ŝ = -∫ P_eq(θ) ln[P_eq(θ)] sin θ dθ (Ŝ is "entropy" in units of k_BT, and θ represents the bond-vector orientation in the protein). SRLS is applied to 4-oxalocrotonate tautomerase (4-OT), the acyl-coenzyme A binding protein (ACBP), the C-terminal SH2 domain of phospholipase Cγ1 (PLCγ1C SH2), the construct dihydrofolate reductase-E:folate (DHFR-E:folate), and their complexes with appropriate ligands, to determine ΔŜ. Eglin C and its V18A and V34A mutants are also studied. Finally, SRLS is applied to the structurally homologous proteins TNfn3 and FNfn10 to characterize within its scope the unusual "dynamics" of the TNfn3 core. Upon ligand-binding, the backbones of 4-OT, ACBP, and PLCγ1C SH2 show limited, increased, and decreased order, respectively; the cores of DHFR-E:folate and PLCγ1C SH2 become more ordered. The V18A (V34A) mutation increases (decreases) the order within the eglin C core. The core of TNfn3 is less ordered structurally and more mobile kinetically. Secondary structure versus loops, surface-binding versus core insertion, and ligand size emerged as being important in rationalizing ΔŜ. The consistent and general tool developed herein is expected to provide further insights in future work.
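
    As a small numerical illustration of the entropy expression quoted above (not of the SRLS fitting itself), the sketch below assumes a simple axial ordering potential u(θ) = -c·cos²θ, with the strength c purely illustrative, normalizes P_eq over orientations, and evaluates Ŝ by direct quadrature.

```python
import numpy as np

def conformational_entropy(u, n=4000):
    """S-hat = -∫ P_eq(θ) ln P_eq(θ) sinθ dθ, with P_eq ∝ exp(-u(θ)) and u in k_BT units."""
    theta = np.linspace(1e-6, np.pi - 1e-6, n)
    dtheta = theta[1] - theta[0]
    w = np.exp(-u(theta))
    Z = np.sum(w * np.sin(theta)) * dtheta        # normalization constant
    p = w / Z                                     # orientational probability density
    return float(-np.sum(p * np.log(p) * np.sin(theta)) * dtheta)

# Illustrative axial ordering potential u(θ) = -c·cos²θ (c is a made-up restraint strength).
for c in (0.0, 2.0, 5.0, 10.0):
    s_hat = conformational_entropy(lambda t: -c * np.cos(t) ** 2)
    print(f"c = {c:4.1f}  ->  S-hat = {s_hat:+.3f}")
```

    As the made-up restraint strength c grows, the computed Ŝ drops, which is the qualitative trend the abstract associates with increased local order.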

  8. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and the propagation delay of SET inverter. A new analytical model for the noise margin of SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter quantization threshold which is introduced for the first time in this paper. Quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that SET inverter designed with C_T:C_G = 1/3 (where C_T and C_G are the tunnel-junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  9. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

    We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p +Pb and Pb + Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  10. Image reconstruction of IRAS survey scans

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. Romke

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.

  11. Towards a next generation open-source video codec

    NASA Astrophysics Data System (ADS)

    Bankoski, Jim; Bultje, Ronald S.; Grange, Adrian; Gu, Qunshan; Han, Jingning; Koleszar, John; Mukherjee, Debargha; Wilkins, Paul; Xu, Yaowu

    2013-02-01

    Google has recently been developing a next generation open-source video codec called VP9, as part of the experimental branch of the libvpx repository included in the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, a number of enhancements and new tools have been added to improve the coding efficiency. This paper provides a technical overview of the current status of this project along with comparisons against other state-of-the-art video codecs, H.264/AVC and HEVC. The new tools that have been added so far include: larger prediction block sizes up to 64x64, various forms of compound INTER prediction, more modes for INTRA prediction, 1/8-pel motion vectors and 8-tap switchable subpel interpolation filters, improved motion reference generation and motion vector coding, improved entropy coding and frame-level entropy adaptation for various symbols, improved loop filtering, incorporation of Asymmetric Discrete Sine Transforms and larger 16x16 and 32x32 DCTs, frame level segmentation to group similar areas together, etc. Other tools and various bitstream features are being actively worked on as well. The VP9 bitstream is expected to be finalized by early to mid-2013. Results show VP9 to be quite competitive in performance with mainstream state-of-the-art codecs.

  12. Entropy Production in Field Theories without Time-Reversal Symmetry: Quantifying the Non-Equilibrium Character of Active Matter

    NASA Astrophysics Data System (ADS)

    Nardini, Cesare; Fodor, Étienne; Tjhung, Elsen; van Wijland, Frédéric; Tailleur, Julien; Cates, Michael E.

    2017-04-01

    Active-matter systems operate far from equilibrium because of the continuous energy injection at the scale of constituent particles. At larger scales, described by coarse-grained models, the global entropy production rate S quantifies the probability ratio of forward and reversed dynamics and hence the importance of irreversibility at such scales: It vanishes whenever the coarse-grained dynamics of the active system reduces to that of an effective equilibrium model. We evaluate S for a class of scalar stochastic field theories describing the coarse-grained density of self-propelled particles without alignment interactions, capturing such key phenomena as motility-induced phase separation. We show how the entropy production can be decomposed locally (in real space) or spectrally (in Fourier space), allowing detailed examination of the spatial structure and correlations that underlie departures from equilibrium. For phase-separated systems, the local entropy production is concentrated mainly on interfaces, with a bulk contribution that tends to zero in the weak-noise limit. In homogeneous states, we find a generalized Harada-Sasa relation that directly expresses the entropy production in terms of the wave-vector-dependent deviation from the fluctuation-dissipation relation between response functions and correlators. We discuss extensions to the case where the particle density is coupled to a momentum-conserving solvent and to situations where the particle current, rather than the density, should be chosen as the dynamical field. We expect the new conceptual tools developed here to be broadly useful in the context of active matter, allowing one to distinguish when and where activity plays an essential role in the dynamics.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vignat, C.; Bercher, J.-F.

    The family of Tsallis entropies was introduced by Tsallis in 1988. The Shannon entropy belongs to this family as the limit case q → 1. The canonical distributions in R^n that maximize this entropy under a covariance constraint are easily derived as Student-t (q<1) and Student-r (q>1) multivariate distributions. A nice geometrical result about these Student-r distributions is that they are marginals of uniform distributions on a sphere of larger dimension d, with the relationship d = n + 2 + 2/(q-1). As q → 1, we recover Poincaré's famous observation according to which a Gaussian vector can be viewed as the projection of a vector uniformly distributed on the infinite-dimensional sphere. A related property in the case q<1 is also available. Often associated with Renyi-Tsallis entropies is the notion of escort distributions. We provide here a geometric interpretation of these distributions. Another result concerns a universal system in physics, the harmonic oscillator: in the usual quantum context, the waveform of the n-th state of the harmonic oscillator is a Gaussian waveform multiplied by the degree-n Hermite polynomial. We show, starting from recent results by Carinena et al., that the quantum harmonic oscillator on spaces with constant curvature is described by maximal Tsallis entropy waveforms multiplied by the extended Hermite polynomials derived from this measure. This gives a neat interpretation of the non-extensive parameter q in terms of the curvature of the space the oscillator evolves on; as q → 1, the curvature of the space goes to 0 and we recover the classical harmonic oscillator in R^3.

  14. Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds

    NASA Astrophysics Data System (ADS)

    Schlichenmaier, Martin

    2018-04-01

    For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product), are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star products of (anti-) Wick type, is a crucial property. The Berezin-Toeplitz, Berezin, and geometric quantization star products are treated as canonically defined star products. It turns out that all three are equivalent but distinct.

  15. The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs

    PubMed Central

    Carhart-Harris, Robin L.; Leech, Robert; Hellyer, Peter J.; Shanahan, Murray; Feilding, Amanda; Tagliazucchi, Enzo; Chialvo, Dante R.; Nutt, David

    2014-01-01

    Entropy is a dimensionless quantity that is used for measuring uncertainty about the state of a system but it can also imply physical qualities, where high entropy is synonymous with high disorder. Entropy is applied here in the context of states of consciousness and their associated neurodynamics, with a particular focus on the psychedelic state. The psychedelic state is considered an exemplar of a primitive or primary state of consciousness that preceded the development of modern, adult, human, normal waking consciousness. Based on neuroimaging data with psilocybin, a classic psychedelic drug, it is argued that the defining feature of “primary states” is elevated entropy in certain aspects of brain function, such as the repertoire of functional connectivity motifs that form and fragment across time. Indeed, since there is a greater repertoire of connectivity motifs in the psychedelic state than in normal waking consciousness, this implies that primary states may exhibit “criticality,” i.e., the property of being poised at a “critical” point in a transition zone between order and disorder where certain phenomena such as power-law scaling appear. Moreover, if primary states are critical, then this suggests that entropy is suppressed in normal waking consciousness, meaning that the brain operates just below criticality. It is argued that this entropy suppression furnishes normal waking consciousness with a constrained quality and associated metacognitive functions, including reality-testing and self-awareness. It is also proposed that entry into primary states depends on a collapse of the normally highly organized activity within the default-mode network (DMN) and a decoupling between the DMN and the medial temporal lobes (which are normally significantly coupled). These hypotheses can be tested by examining brain activity and associated cognition in other candidate primary states such as rapid eye movement (REM) sleep and early psychosis and comparing these with non-primary states such as normal waking consciousness and the anaesthetized state. PMID:24550805

  16. The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs.

    PubMed

    Carhart-Harris, Robin L; Leech, Robert; Hellyer, Peter J; Shanahan, Murray; Feilding, Amanda; Tagliazucchi, Enzo; Chialvo, Dante R; Nutt, David

    2014-01-01

    Entropy is a dimensionless quantity that is used for measuring uncertainty about the state of a system but it can also imply physical qualities, where high entropy is synonymous with high disorder. Entropy is applied here in the context of states of consciousness and their associated neurodynamics, with a particular focus on the psychedelic state. The psychedelic state is considered an exemplar of a primitive or primary state of consciousness that preceded the development of modern, adult, human, normal waking consciousness. Based on neuroimaging data with psilocybin, a classic psychedelic drug, it is argued that the defining feature of "primary states" is elevated entropy in certain aspects of brain function, such as the repertoire of functional connectivity motifs that form and fragment across time. Indeed, since there is a greater repertoire of connectivity motifs in the psychedelic state than in normal waking consciousness, this implies that primary states may exhibit "criticality," i.e., the property of being poised at a "critical" point in a transition zone between order and disorder where certain phenomena such as power-law scaling appear. Moreover, if primary states are critical, then this suggests that entropy is suppressed in normal waking consciousness, meaning that the brain operates just below criticality. It is argued that this entropy suppression furnishes normal waking consciousness with a constrained quality and associated metacognitive functions, including reality-testing and self-awareness. It is also proposed that entry into primary states depends on a collapse of the normally highly organized activity within the default-mode network (DMN) and a decoupling between the DMN and the medial temporal lobes (which are normally significantly coupled). These hypotheses can be tested by examining brain activity and associated cognition in other candidate primary states such as rapid eye movement (REM) sleep and early psychosis and comparing these with non-primary states such as normal waking consciousness and the anaesthetized state.

  17. Expectation values of twist fields and universal entanglement saturation of the free massive boson

    NASA Astrophysics Data System (ADS)

    Blondeau-Fournier, Olivier; Doyon, Benjamin

    2017-07-01

    The evaluation of vacuum expectation values (VEVs) in massive integrable quantum field theory (QFT) is a nontrivial renormalization-group ‘connection problem’—relating large and short distance asymptotics—and is in general unsolved. This is particularly relevant in the context of entanglement entropy, where VEVs of branch-point twist fields give universal saturation predictions. We propose a new method to compute VEVs of twist fields associated to continuous symmetries in QFT. The method is based on a differential equation in the continuous symmetry parameter, and gives VEVs as infinite form-factor series which truncate at two-particle level in free QFT. We verify the method by studying U(1) twist fields in free models, which are simply related to the branch-point twist fields. We provide the first exact formulae for the VEVs of such fields in the massive uncompactified free boson model, checking against an independent calculation based on angular quantization. We show that logarithmic terms, overlooked in the original work of Callan and Wilczek (1994 Phys. Lett. B 333 55-61), appear both in the massless and in the massive situations. This implies that, in agreement with numerical form-factor observations by Bianchini and Castro-Alvaredo (2016 Nucl. Phys. B 913 879-911), the standard power-law short-distance behavior is corrected by a logarithmic factor. We discuss how this gives universal formulae for the saturation of entanglement entropy of a single interval in near-critical harmonic chains, including loglog corrections.

  18. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks

    PubMed Central

    Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-01-01

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained property of an underwater environment, such as restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-squares support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the data distorted by sensing noises and to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the particle weights. Thus, the particle effectiveness is enhanced to avoid the “particle degeneracy” problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252

  19. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained property of an underwater environment, such as restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-squares support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the data distorted by sensing noises and to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the particle weights. Thus, the particle effectiveness is enhanced to avoid the "particle degeneracy" problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
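
    The two records above describe a bootstrap particle filter whose observation model is learned by LSSVR. The sketch below keeps only the filtering skeleton (propagate, weight by a likelihood, resample) for a one-dimensional toy target, with a plain Gaussian range-measurement model standing in for the learned LSSVR observation function; every signal and parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 1-D target moving at a known constant velocity, one range sensor at x = 0.
T, dt, vel = 30, 1.0, 0.8
true_x = 5.0 + vel * dt * np.arange(T)                        # true positions
meas = np.abs(true_x) + rng.normal(0.0, 0.5, size=T)          # noisy range measurements

# Bootstrap particle filter (Gaussian likelihood stands in for the LSSVR observation model).
N = 500
particles = rng.uniform(0.0, 20.0, size=N)                    # position hypotheses
weights = np.full(N, 1.0 / N)
sigma_v, sigma_z = 0.3, 0.5                                   # process / measurement std

estimates = []
for z in meas:
    particles += vel * dt + rng.normal(0.0, sigma_v, size=N)  # propagate
    weights *= np.exp(-0.5 * ((np.abs(particles) - z) / sigma_z) ** 2)
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))
    # Systematic resampling to counter particle degeneracy.
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0
    idx = np.searchsorted(cdf, (rng.random() + np.arange(N)) / N)
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print(f"final estimate {estimates[-1]:.2f} vs true position {true_x[-1]:.2f}")
```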

  20. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach for the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly for the new entropy coder and the features that allow other applications than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.
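
    As a rough, self-contained illustration of the transform / scalar-quantization / entropy-coding chain described above, the toy below uses a single-level Haar transform on random data in place of the WSQ 9/7 filter bank and its 64-subband decomposition, applies uniform scalar quantization with made-up step sizes per subband, and reports the first-order entropy of the quantizer indices as a stand-in for an actual Huffman coder.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform (toy stand-in for the WSQ 9/7 filters)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def entropy_bits(q):
    """First-order entropy of the quantizer indices, in bits per coefficient."""
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(float)    # stand-in image tile

steps = {"ll": 2.0, "hl": 8.0, "lh": 8.0, "hh": 16.0}        # made-up bit allocation
total_bits = 0.0
for name, band in zip(steps, haar2d(img)):
    q = np.round(band / steps[name]).astype(int)             # uniform scalar quantization
    total_bits += entropy_bits(q) * q.size

print(f"estimated rate: {total_bits / img.size:.2f} bit/pixel (8 bit original)")
```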

  1. Axial vector Z‧ and anomaly cancellation

    NASA Astrophysics Data System (ADS)

    Ismail, Ahmed; Keung, Wai-Yee; Tsao, Kuo-Hsing; Unwin, James

    2017-05-01

    Whilst the prospect of new Z‧ gauge bosons with only axial couplings to the Standard Model (SM) fermions is widely discussed, examples of anomaly-free renormalisable models are lacking in the literature. We look to remedy this by constructing several motivated examples. Specifically, we consider axial vectors which couple universally to all SM fermions, as well as those which are generation-specific, leptophilic, and leptophobic. Anomaly cancellation typically requires the presence of new coloured and charged chiral fermions, and we argue that in a large class of models masses of these new states are expected to be comparable to that of the axial vector. Finally, an axial vector mediator could provide a portal between SM and hidden sector states, and we also consider the possibility that the axial vector couples to dark matter. If the dark matter relic density is set due to freeze-out via the axial vector, this strongly constrains the parameter space.

  2. Interferometric tests of Planckian quantum geometry models

    DOE PAGES

    Kwon, Ohkyung; Hogan, Craig J.

    2016-04-19

    The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.

  3. Shannon entropy in the research on stationary regimes and the evolution of complexity

    NASA Astrophysics Data System (ADS)

    Eskov, V. M.; Eskov, V. V.; Vochmina, Yu. V.; Gorbunov, D. V.; Ilyashenko, L. K.

    2017-05-01

    The questions of the identification of complex biological systems (complexity) as special self-organizing systems, or systems of the third type first defined by W. Weaver in 1948, continue to be of interest. No reports on the evaluation of entropy for systems of the third type were found among the publications currently available to the authors. The present study addresses the parameters of muscle biopotentials recorded using surface interference electromyography and presents the results of calculating the Shannon entropy, autocorrelation functions, and statistical distribution functions for electromyograms of subjects in different physiological states (rest and tension of muscles). The results do not allow for statistically reliable discrimination between the functional states of muscles. However, the data obtained by calculating electromyogram quasi-attractor parameters and matrices of paired comparisons of electromyogram samples (calculation of the number k of "coinciding" pairs among the electromyogram samples) provide an integral characteristic that allows the identification of substantial differences between the state of rest and the different states of functional activity. Modification and implementation of new methods, in combination with the novel methods of the theory of chaos and self-organization, are clearly essential. The stochastic approach paradigm is not applicable to systems of the third type due to the continuous and chaotic changes of the parameters of the state vector x(t) of an organism, or the contrasting constancy of these parameters (in the case of entropy).

  4. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
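
    A compact sketch of the perceptual-error bookkeeping this record describes: DCT-transform a block, quantize with a matrix, express the coefficient errors in units of (assumed) visual thresholds, and pool them with a Minkowski norm. The quantization matrix, thresholds, and pooling exponent below are illustrative placeholders, and the light-adaptation and contrast-masking adjustments are omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies, columns = pixels)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def perceptual_error(block, Q, T, beta=4.0):
    """Quantize an 8x8 block with matrix Q, scale coefficient errors by visual
    thresholds T, and pool them with a Minkowski (beta) norm."""
    C = dct_matrix()
    coeffs = C @ block @ C.T                      # forward 2-D DCT
    deq = Q * np.round(coeffs / Q)                # quantize / dequantize
    err = np.abs(coeffs - deq) / T                # errors in threshold units (JNDs)
    return float(np.sum(err ** beta) ** (1.0 / beta))

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0

# Illustrative matrices: coarser quantization and higher thresholds at high frequencies.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q = 8.0 + 4.0 * (i + j)          # made-up quantization matrix
T = 1.0 + 0.5 * (i + j)          # made-up visibility thresholds

print("pooled perceptual error:", perceptual_error(block, Q, T))
```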

  5. Quantifying uncertainty due to fission-fusion dynamics as a component of social complexity.

    PubMed

    Ramos-Fernandez, Gabriel; King, Andrew J; Beehner, Jacinta C; Bergman, Thore J; Crofoot, Margaret C; Di Fiore, Anthony; Lehmann, Julia; Schaffner, Colleen M; Snyder-Mackler, Noah; Zuberbühler, Klaus; Aureli, Filippo; Boyer, Denis

    2018-05-30

    Groups of animals (including humans) may show flexible grouping patterns, in which temporary aggregations or subgroups come together and split, changing composition over short temporal scales (i.e. fission and fusion). A high degree of fission-fusion dynamics may constrain the regulation of social relationships, introducing uncertainty in interactions between group members. Here we use Shannon's entropy to quantify the predictability of subgroup composition for three species known to differ in the way their subgroups come together and split over time: spider monkeys (Ateles geoffroyi), chimpanzees (Pan troglodytes) and geladas (Theropithecus gelada). We formulate a random expectation of entropy that considers subgroup size variation and sample size, against which the observed entropy in subgroup composition can be compared. Using the theory of set partitioning, we also develop a method to estimate the number of subgroups that the group is likely to be divided into, based on the composition and size of single focal subgroups. Our results indicate that Shannon's entropy and the estimated number of subgroups present at a given time provide quantitative metrics of uncertainty in the social environment (within which social relationships must be regulated) for groups with different degrees of fission-fusion dynamics. These metrics also represent an indirect quantification of the cognitive challenges posed by socially dynamic environments. Overall, our novel methodological approach provides new insight for understanding the evolution of social complexity and the mechanisms to cope with the uncertainty that results from fission-fusion dynamics. © 2017 The Author(s).
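
    A minimal sketch of the entropy measure itself (not of the set-partition machinery or the full random expectation developed in the paper): the Shannon entropy of the distribution of observed subgroup compositions, compared with subgroups of the same sizes drawn at random from the group. The group size and sampling choices below are invented.

```python
import numpy as np
from collections import Counter

def composition_entropy(subgroups):
    """Shannon entropy (bits) of the distribution of distinct subgroup compositions."""
    counts = np.array(list(Counter(map(frozenset, subgroups)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
members = list(range(12))                                 # a group of 12 individuals

# Observed subgroups (illustrative): three stable cliques -> highly predictable composition.
cliques = [members[0:4], members[4:8], members[8:12]]
observed = [cliques[i] for i in rng.integers(0, 3, size=200)]

# Random expectation: subgroups of the same sizes drawn at random from the whole group.
random_draws = [list(rng.choice(members, size=len(s), replace=False)) for s in observed]

print("observed entropy          :", round(composition_entropy(observed), 3))
print("random-expectation entropy:", round(composition_entropy(random_draws), 3))
```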

  6. Dielectric properties of classical and quantized ionic fluids.

    PubMed

    Høye, Johan S

    2010-06-01

    We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanical systems. With this approach the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here, we focus upon a time-dependent electromagnetic pair interaction in which the electromagnetic vector potential, which depends upon currents, is present. Thus both density and current correlations are needed to evaluate the influence of this interaction. Then we utilize that densities and currents can be expressed by polarizations, by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. This nonlocality has as a consequence that we find no contribution from a possible transverse electric zero-frequency mode for the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations for the energies of the quantized electrons of molecules where now retardation effects are also taken into account.

  7. First integrals of motion in a gauge covariant framework, Killing-Maxwell system and quantum anomalies

    NASA Astrophysics Data System (ADS)

    Visinescu, M.

    2012-10-01

    Hidden symmetries in a covariant Hamiltonian framework are investigated. The special role of the Stackel-Killing and Killing-Yano tensors is pointed out. The covariant phase space is extended to include external gauge fields and scalar potentials. We investigate the possibility for a higher-order symmetry to survive when the electromagnetic interactions are taken into account. A concrete realization of this possibility is given by the Killing-Maxwell system. The classical conserved quantities do not generally transfer to the quantized systems, producing quantum gravitational anomalies. As a rule the conformal extension of the Killing vectors and tensors does not produce symmetry operators for the Klein-Gordon operator.

  8. Asymptotics of the evolution semigroup associated with a scalar field in the presence of a non-linear electromagnetic field

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Tamura, Hiroshi

    2018-04-01

    We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R^4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).

  9. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
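
    The second paper's contribution is a parallel block-matching algorithm; the sketch below shows only the serial reference computation that such an algorithm parallelizes: an exhaustive search, for each block, for the displacement that minimizes the sum of absolute differences (SAD), run here on synthetic frames.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each block of `curr`, find the displacement
    within +/- `search` pixels in `prev` that minimizes the sum of absolute differences."""
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - target).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            vectors[by // block, bx // block] = best
    return vectors

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(32, 32)).astype(float)
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))      # global shift of the whole frame
print(block_match(prev, curr)[1, 1])                  # interior block: expect (-2, 1)
```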

  10. Coherent distributions for the rigid rotator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigorescu, Marius

    2016-06-15

    Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite “zero point” energy term. It is shown that during the time when a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.

  11. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  12. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  13. Quantum theory of structured monochromatic light

    NASA Astrophysics Data System (ADS)

    Punnoose, Alexander; Tu, J. J.

    2017-08-01

    Applications that envisage utilizing the orbital angular momentum (OAM) at the single photon level assume that the OAM degrees of freedom of the photons are orthogonal. To test this critical assumption, we quantize the beam-like solutions of the vector Helmholtz equation from first principles. We show that although the photon operators of a diffracting monochromatic beam do not in general satisfy the canonical commutation relations, implying that the photon states in Fock space are not orthogonal, the states are bona fide eigenstates of the number and Hamiltonian operators. As a result, the representation for the photon operators presented in this work form a natural basis to study structured monochromatic light at the single photon level.

  14. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating the current quantization effects.

  15. High-quality lossy compression: current and future trends

    NASA Astrophysics Data System (ADS)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework where each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain resulting in high fidelity and high compression.

  16. Computer Science and Statistics. Proceedings of the Symposium on the Interface (18th) Held on March 19-21, 1986 in Fort Collins, Colorado.

    DTIC Science & Technology

    1987-08-26

    example, expert systems research would benefit examples are the Acute Renal Failure [15] system, the if it could attract statisticians to assist in...research projects including the Acute Renal Failure [15] system, the 6. EXPLAINING COMPLEX REASONING INTERNIST-] [22] system for diagnosis within the...the MEDAS and Acute Renal Failure systems. task at any point in reasoning about a case is constrained to Entropy-discriminate makes use of a measure

  17. Divvy Economies Based On (An Abstract) Temperature

    NASA Astrophysics Data System (ADS)

    Collins, Dennis G.

    2004-04-01

    The Leontief Input-Output economic system can provide a model for a one-parameter family of economic systems based on an abstract temperature T. In particular, given a normalized input-output matrix R and taking R = R(1), a family of economic systems R(1/T) = R(α) is developed that represents heating (T>1) and cooling (T<1) of the economy relative to T=1. The economy for a given value of T represents the solution of a constrained maximum entropy problem.

  18. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also brings about bad tracking performance as a result of information loss after quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme, and increasing the data length affects our scheme only slightly. Its tracking performance improves by only 4.4% from 2- to 3-bit, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with 4 × 4 × 4 sensor networks, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme can achieve the pursuit of data-efficiency, which fits the requirements of low-bandwidth UWSNs.

  19. Study on vibration characteristics and fault diagnosis method of oil-immersed flat wave reactor in Arctic area converter station

    NASA Astrophysics Data System (ADS)

    Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang

    2017-10-01

    Based on long term vibration monitoring of the No.2 oil-immersed flat wave reactor in the ±500kV converter station in East Mongolia, the vibration signals in the normal state and in the core loose fault state were saved. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) was proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test on the feature vectors, and accurate identification of the core loose fault of the flat wave reactor was realized. Through the identification of many groups of normal and core loose fault state vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method in the fault diagnosis of the flat wave reactor core is verified.
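
    A rough sketch of the feature-plus-classifier pipeline described above: sub-band energy entropy computed from a wavelet decomposition and fed to an SVM. PyWavelets' ordinary DWT is used as a stand-in for the dual-tree complex wavelet transform, scikit-learn is assumed available, and the "normal" versus "core loose" vibration signals are synthetic.

```python
import numpy as np
import pywt                      # PyWavelets; plain DWT stands in for DT-CWT here
from sklearn.svm import SVC

def energy_entropy_features(signal, wavelet="db4", level=4):
    """Relative sub-band energies plus their Shannon entropy as the feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.concatenate([p, [entropy]])

def make_signal(fault, n=1024, rng=None):
    """Toy vibration signal: 100 Hz fundamental; the 'core loose' class adds harmonics."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(n) / 10000.0
    x = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(n)
    if fault:
        x += 0.6 * np.sin(2 * np.pi * 300 * t) + 0.4 * np.sin(2 * np.pi * 500 * t)
    return x

rng = np.random.default_rng(0)
X = np.array([energy_entropy_features(make_signal(i % 2, rng=rng)) for i in range(60)])
y = np.array([i % 2 for i in range(60)])

clf = SVC(kernel="rbf").fit(X[:40], y[:40])
print("toy accuracy:", (clf.predict(X[40:]) == y[40:]).mean())
```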

  20. Detection of potential mosquito breeding sites based on community sourced geotagged images

    NASA Astrophysics Data System (ADS)

    Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna

    2014-06-01

    Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to make better and informed data-driven decisions. Our work shows how geotagged images collected by the general population can be used to combat malaria and dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions like blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag of visual words model. We train an SVM classifier on a histogram-based feature vector obtained after the vector quantization of SIFT features to discriminate images containing either a small stagnant water body like a puddle, or open containers, tires, bushes, etc., from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using feature level fusion or the max approach presented in the paper. The heat map thus generated can be used by concerned health authorities to take appropriate action and to promote civic awareness.
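
    A minimal bag-of-visual-words sketch along the lines described above: local descriptors are vector-quantized against a k-means codebook, the resulting histogram is the image feature, and an SVM separates the two classes. Random image patches stand in for SIFT descriptors, and the "images" and class structure are synthetic (scikit-learn is assumed available).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def local_descriptors(image, patch=8, n_patches=50):
    """Stand-in for SIFT: flattened random patches as local descriptors."""
    h, w = image.shape
    ys = rng.integers(0, h - patch, size=n_patches)
    xs = rng.integers(0, w - patch, size=n_patches)
    return np.array([image[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])

def bovw_histogram(descriptors, kmeans):
    """Vector-quantize descriptors against the codebook and build a normalized histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Toy images: class 1 ("stagnant water") is darker / flatter than class 0.
def make_image(label):
    img = rng.random((64, 64))
    return img * 0.3 if label else img

images = [make_image(i % 2) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])

# Build the visual vocabulary from training descriptors (k-means vector quantization).
all_desc = np.vstack([local_descriptors(im) for im in images[:30]])
codebook = KMeans(n_clusters=32, n_init=4, random_state=0).fit(all_desc)

X = np.array([bovw_histogram(local_descriptors(im), codebook) for im in images])
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("toy accuracy:", (clf.predict(X[30:]) == labels[30:]).mean())
```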

  1. Velocity-tunable slow beams of cold O2 in a single spin-rovibronic state with full angular-momentum orientation by multistage Zeeman deceleration

    NASA Astrophysics Data System (ADS)

    Wiederkehr, A. W.; Schmutz, H.; Motsch, M.; Merkt, F.

    2012-08-01

    Cold samples of oxygen molecules in supersonic beams have been decelerated from initial velocities of 390 and 450 m/s to final velocities in the range between 150 and 280 m/s using a 90-stage Zeeman decelerator. (2 + 1) resonance-enhanced multiphoton ionization (REMPI) spectra of the 3sσ_g ³Π_g (C) ← X ³Σ_g⁻ two-photon transition of O2 have been recorded to characterize the state selectivity of the deceleration process. The decelerated molecular sample was found to consist exclusively of molecules in the J″ = 2 spin-rotational component of the X ³Σ_g⁻ ground state of O2. Measurements of the REMPI spectra using linearly polarized laser radiation with the polarization vector parallel to the decelerator axis, and thus to the magnetic-field vector of the deceleration solenoids, further showed that only a single magnetic sublevel of the N″ = 1, J″ = 2 spin-rotational level is populated in the decelerated sample, which therefore is characterized by a fully oriented total-angular-momentum vector. By maintaining a weak quantization magnetic field beyond the decelerator, the polarization of the sample could be maintained over the 5 cm distance separating the last deceleration solenoid and the detection region.

  2. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often suggests ideas for clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution - these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
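
    For reference, a plain LVQ1 update rule, the basic prototype update that the generalizations discussed above build on: the winning prototype is attracted to a correctly labeled sample and repelled from an incorrectly labeled one. The data, learning rate, and epoch count below are illustrative.

```python
import numpy as np

def lvq1(X, y, n_prototypes_per_class=1, lr=0.1, epochs=30, rng=None):
    """Plain LVQ1: the winning prototype moves toward a sample of its own class
    and away from a sample of a different class."""
    rng = rng or np.random.default_rng(0)
    classes = np.unique(y)
    protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_prototypes_per_class)]
                        for c in classes])
    proto_labels = np.repeat(classes, n_prototypes_per_class)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = int(np.argmin(d))                        # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])  # attract or repel
    return protos, proto_labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos, labels = lvq1(X, y, rng=rng)
print("prototypes:\n", protos, "\nlabels:", labels)
```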

  3. LVQ and backpropagation neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

    Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.

  4. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12 and 24 hours ahead and for different data-reduction sizes are provided in the Supplementary material.
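
    Below is a minimal sketch of the dimensionality-reduction-plus-kernel-regression idea described above, assuming scikit-learn is available; KernelRidge with an RBF kernel is used here as a rough stand-in for LS-SVM (both minimize a squared loss in the same kernel space), and the PV series, lag window, and forecast horizon are synthetic placeholders rather than the data of the study.

      # PCA feature reduction followed by RBF kernel regression (LS-SVM stand-in).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      hours = np.arange(24 * 60)                               # 60 synthetic days
      pv_power = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) + 0.05 * rng.normal(size=hours.size)

      lags, horizon = 24, 1                                    # past 24 h used to predict 1 h ahead
      X = np.array([pv_power[i - lags:i] for i in range(lags, len(pv_power) - horizon)])
      y = pv_power[lags + horizon:]

      model = make_pipeline(PCA(n_components=5), KernelRidge(kernel="rbf", alpha=1.0))
      model.fit(X[:-100], y[:-100])
      print("test RMSE:", np.sqrt(np.mean((model.predict(X[-100:]) - y[-100:]) ** 2)))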

  5. Modeling Malaria Vector Distribution under Climate Change Scenarios in Kenya

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2017-12-01

    Projecting the distribution of malaria vectors under climate change is essential for planning integrated vector control strategies, for sustaining elimination and for preventing the reintroduction of malaria. However, in Kenya, little knowledge exists on the possible effects of climate change on malaria vectors. Here we assess the potential impact of future climate change on the locally dominant Anopheles vectors, including Anopheles gambiae, Anopheles arabiensis, Anopheles merus, Anopheles funestus, Anopheles pharoensis and Anopheles nili. Environmental data (climate, land cover and elevation) and primary empirical geo-located species-presence data were identified. The principle of maximum entropy (Maxent) was used to model each species' potential distribution area under paleoclimate, current and future climates. The Maxent model was highly accurate, with a statistically significant AUC value. Simulation-based estimates suggest that the environmentally suitable area (ESA) for Anopheles gambiae, An. arabiensis, An. funestus and An. pharoensis would increase under both scenarios for mid-century (2016-2045), but decrease by end of century (2071-2100). An increase in the ESA of An. funestus was estimated under the medium-stabilization (RCP4.5) and high-emission (RCP8.5) scenarios for mid-century. Our findings can be applied in various ways, such as identifying additional localities where Anopheles malaria vectors may already exist but have not yet been detected, and recognizing localities to which they are likely to spread. Moreover, they will help guide future sampling-location decisions, help with the planning of vector control suites nationally, and encourage broader research inquiry into vector species niche modeling.
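
    As a rough illustration of presence/background distribution modelling, the sketch below fits a penalized logistic regression to synthetic presence and background points; this is only a simplified stand-in for Maxent (the two are closely related for presence-only data), and every covariate and parameter here is invented rather than taken from the study.

      # Presence vs. background classification as a crude Maxent-like suitability model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      env = rng.normal(size=(5000, 3))                 # synthetic covariates (e.g. temperature, rainfall, elevation)
      true_suitability = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 1.0 * env[:, 1])))

      presence_idx = np.where(rng.random(5000) < 0.1 * true_suitability)[0]   # presence records
      background_idx = rng.choice(5000, size=500, replace=False)              # background sample

      X = np.vstack([env[presence_idx], env[background_idx]])
      y = np.concatenate([np.ones(len(presence_idx)), np.zeros(len(background_idx))])

      model = LogisticRegression(C=1.0).fit(X, y)
      suitability = model.predict_proba(env)[:, 1]     # relative environmental suitability per cell
      print("mean predicted suitability:", round(float(suitability.mean()), 3))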

  6. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
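
    For context, the sketch below shows the block-DCT quantization step whose matrix such methods optimize; the 8x8 table is the familiar JPEG luminance example and is used purely for illustration, not one of the perceptually optimized color matrices of the paper.

      # Quantize and reconstruct one 8x8 block with a DCT and a quantization matrix.
      import numpy as np
      from scipy.fftpack import dct, idct

      Q = np.array([                                       # JPEG luminance table (illustrative only)
          [16, 11, 10, 16, 24, 40, 51, 61],
          [12, 12, 14, 19, 26, 58, 60, 55],
          [14, 13, 16, 24, 40, 57, 69, 56],
          [14, 17, 22, 29, 51, 87, 80, 62],
          [18, 22, 37, 56, 68, 109, 103, 77],
          [24, 35, 55, 64, 81, 104, 113, 92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

      def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
      def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

      block = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float) - 128
      quantized = np.round(dct2(block) / Q)                # the lossy step the matrix controls
      reconstructed = idct2(quantized * Q) + 128
      print("max abs reconstruction error:", np.abs(reconstructed - (block + 128)).max())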

  7. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  8. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.

  9. Origins and properties of kappa distributions in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, George

    2016-07-01

    Classical particle systems reside at thermal equilibrium with their velocity distribution function stabilized into a Maxwell distribution. On the contrary, collisionless and correlated particle systems, such as the space and astrophysical plasmas, are characterized by a non-Maxwellian behavior, typically described by the so-called kappa distributions. Empirical kappa distributions have become increasingly widespread across space and plasma physics. However, a breakthrough in the field came with the connection of kappa distributions to the solid statistical framework of Tsallis non-extensive statistical mechanics. Understanding the statistical origin of kappa distributions was the cornerstone of further theoretical developments and applications, some of which will be presented in this talk: (i) the physical meaning of thermal parameters, e.g., temperature and kappa index; (ii) the multi-particle description of kappa distributions; (iii) the phase-space kappa distribution of a Hamiltonian with non-zero potential; (iv) the Sackur-Tetrode entropy for kappa distributions; and (v) the new quantization constant, h_* ≈ 10^{-22} J s.
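
    For reference, one common normalization of the isotropic kappa velocity distribution and its Maxwellian limit is given below (conventions for the kappa index and the thermal speed θ vary across the literature, so this form is indicative rather than the one used in the talk):

      % Isotropic kappa distribution and its large-kappa (Maxwellian) limit.
      f_\kappa(v) = \frac{n}{(\pi\kappa\theta^2)^{3/2}}
        \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-\tfrac{1}{2})}
        \left(1 + \frac{v^2}{\kappa\theta^2}\right)^{-(\kappa+1)},
      \qquad
      \lim_{\kappa\to\infty} f_\kappa(v) = \frac{n}{(\pi\theta^2)^{3/2}}\, e^{-v^2/\theta^2}.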

  10. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.

  11. Simultaneous Conduction and Valence Band Quantization in Ultrashallow High-Density Doping Profiles in Semiconductors

    NASA Astrophysics Data System (ADS)

    Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.

    2018-01-01

    We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
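
    The particle-in-a-box estimate invoked above takes the following form, with L the width of the confining region (set by the dopant depth) and m* an effective hole mass; both values are inputs of the simple model rather than measured quantities:

      % Energies of a particle of effective mass m* in an infinite well of width L.
      E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^{*} L^2}, \qquad n = 1, 2, 3, \dots,
      \qquad
      E_{n+1} - E_n = \frac{(2n+1)\pi^2\hbar^2}{2 m^{*} L^2} \propto \frac{1}{L^2},

    so the number of bound states and their energy separations depend directly on the layer depth, as observed for the valence-band states.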

  12. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models prior knowledge. This prior knowledge, together with the inter-frame difference, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
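
    A minimal sketch of the two-stage idea is given below: a support vector machine produces per-frame class probabilities, and a Viterbi decode under a left-to-right transition matrix smooths them into contiguous segments. The color/texture features, the Poisson segment-length prior and the Gaussian inter-frame term of the paper are simplified away, and all data and transition values are assumptions.

      # Per-frame SVM probabilities + left-to-right Viterbi smoothing (simplified).
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_classes, frames_per_part = 4, 100
      X = np.vstack([rng.normal(c, 1.0, size=(frames_per_part, 5)) for c in range(n_classes)])
      y = np.repeat(np.arange(n_classes), frames_per_part)     # synthetic "GI-tract parts"

      svm = SVC(probability=True).fit(X, y)
      log_emission = np.log(svm.predict_proba(X) + 1e-12)      # shape (T, n_classes)

      A = np.full((n_classes, n_classes), 1e-12)               # stay in a part or advance to the next
      for s in range(n_classes):
          A[s, s] = 0.95
          if s + 1 < n_classes:
              A[s, s + 1] = 0.05
      log_A = np.log(A)

      T = len(X)
      delta = np.full((T, n_classes), -np.inf)                 # Viterbi scores
      psi = np.zeros((T, n_classes), dtype=int)                # backpointers
      delta[0, 0] = log_emission[0, 0]                         # the video starts in part 0
      for t in range(1, T):
          scores = delta[t - 1][:, None] + log_A               # indexed [previous, current]
          psi[t] = scores.argmax(axis=0)
          delta[t] = scores.max(axis=0) + log_emission[t]
      path = [int(delta[-1].argmax())]
      for t in range(T - 1, 0, -1):
          path.append(int(psi[t, path[-1]]))
      path = path[::-1]
      print("segment boundaries at frames:", np.where(np.diff(path) != 0)[0] + 1)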

  13. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
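
    The underlying problem is a large set of nonnegativity-constrained least-squares fits, min ||Ax − b|| subject to x ≥ 0, sharing one design matrix A. The sketch below simply solves them column by column with SciPy's nnls; the combinatorial algorithm reaches the same solutions much faster by grouping observation vectors that share an active set, a reorganization not reproduced here.

      # Column-by-column nonnegative least squares over many observation vectors.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      A = rng.random((50, 5))                                  # design matrix (e.g. spectral components)
      X_true = rng.random((5, 200))                            # nonnegative coefficients
      B = A @ X_true + 0.01 * rng.normal(size=(50, 200))       # many observation vectors

      X_hat = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
      print("max coefficient error:", np.abs(X_hat - X_true).max())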

  14. Analysis and Recognition of Traditional Chinese Medicine Pulse Based on the Hilbert-Huang Transform and Random Forest in Patients with Coronary Heart Disease

    PubMed Central

    Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie

    2015-01-01

    Objective. This research provides objective and quantitative parameters of traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with coronary heart disease (CHD) and normal subjects, using the proposed classification approach based on the Hilbert-Huang transform (HHT) and random forest. Methods. Energy and sample entropy features were extracted by applying the HHT to TCM pulse signals, treating them as time series. The two types of extracted features and their combination were each used as input data to a random forest classifier to establish classification models. Results. Statistical results showed that there were significant differences in pulse energy and sample entropy between the CHD group and the normal group. Moreover, when the energy features, sample entropy features, and their combination were used as pulse feature vectors, the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach can be appropriately used to analyze pulses of patients with CHD, which can lay a foundation for research on objective and quantitative criteria for disease diagnosis or Zheng differentiation. PMID:26180536
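
    A hedged sketch of the feature-extraction and random-forest stages follows; it assumes the third-party PyEMD package for the empirical-mode-decomposition step of the HHT, uses synthetic pulse-like signals rather than clinical recordings, and shows only the per-IMF energy features (sample-entropy features would be appended to the same vector).

      # Per-IMF energy features from EMD, classified with a random forest.
      import numpy as np
      from PyEMD import EMD                                    # assumed third-party dependency
      from sklearn.ensemble import RandomForestClassifier

      def imf_energy_features(signal, n_imfs=4):
          imfs = EMD().emd(signal)                             # decomposition step of the HHT
          energies = [float(np.sum(imf ** 2)) for imf in imfs[:n_imfs]]
          return energies + [0.0] * (n_imfs - len(energies))   # pad if fewer IMFs were found

      rng = np.random.default_rng(0)
      t = np.linspace(0, 5, 500)
      signals, labels = [], []
      for i in range(60):
          healthy = i < 30
          freq = 1.2 if healthy else 1.6                       # crude surrogate for pulse differences
          signals.append(np.sin(2 * np.pi * freq * t) + 0.2 * rng.normal(size=t.size))
          labels.append(0 if healthy else 1)

      X = np.array([imf_energy_features(s) for s in signals])
      y = np.array(labels)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[::2], y[::2])
      print("held-out accuracy:", clf.score(X[1::2], y[1::2]))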

  15. Analysis and Recognition of Traditional Chinese Medicine Pulse Based on the Hilbert-Huang Transform and Random Forest in Patients with Coronary Heart Disease.

    PubMed

    Guo, Rui; Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie

    2015-01-01

    Objective. This research provides objective and quantitative parameters of traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with coronary heart disease (CHD) and normal subjects, using the proposed classification approach based on the Hilbert-Huang transform (HHT) and random forest. Methods. Energy and sample entropy features were extracted by applying the HHT to TCM pulse signals, treating them as time series. The two types of extracted features and their combination were each used as input data to a random forest classifier to establish classification models. Results. Statistical results showed that there were significant differences in pulse energy and sample entropy between the CHD group and the normal group. Moreover, when the energy features, sample entropy features, and their combination were used as pulse feature vectors, the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach can be appropriately used to analyze pulses of patients with CHD, which can lay a foundation for research on objective and quantitative criteria for disease diagnosis or Zheng differentiation.

  16. Scalable hybrid computation with spikes.

    PubMed

    Sarpeshkar, Rahul; O'Halloran, Micah

    2002-09-01

    We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.

  17. Quantification of knee vibroarthrographic signal irregularity associated with patellofemoral joint cartilage pathology based on entropy and envelope amplitude measures.

    PubMed

    Wu, Yunfeng; Chen, Pinnan; Luo, Xin; Huang, Hui; Liao, Lifang; Yao, Yuchen; Wu, Meihong; Rangayyan, Rangaraj M

    2016-07-01

    Injury of knee joint cartilage may result in pathological vibrations between the articular surfaces during extension and flexion motions. The aim of this paper is to analyze and quantify vibroarthrographic (VAG) signal irregularity associated with articular cartilage degeneration and injury in the patellofemoral joint. The symbolic entropy (SyEn), approximate entropy (ApEn), fuzzy entropy (FuzzyEn), and the mean, standard deviation, and root-mean-squared (RMS) values of the envelope amplitude, were utilized to quantify the signal fluctuations associated with articular cartilage pathology of the patellofemoral joint. The quadratic discriminant analysis (QDA), generalized logistic regression analysis (GLRA), and support vector machine (SVM) methods were used to perform signal pattern classifications. The experimental results showed that the patients with cartilage pathology (CP) possess larger SyEn and ApEn, but smaller FuzzyEn, over the statistical significance level of the Wilcoxon rank-sum test (p<0.01), than the healthy subjects (HS). The mean, standard deviation, and RMS values computed from the amplitude difference between the upper and lower signal envelopes are also consistently and significantly larger (p<0.01) for the group of CP patients than for the HS group. The SVM based on the entropy and envelope amplitude features can provide superior classification performance as compared with QDA and GLRA, with an overall accuracy of 0.8356, sensitivity of 0.9444, specificity of 0.8, Matthews correlation coefficient of 0.6599, and an area of 0.9212 under the receiver operating characteristic curve. The SyEn, ApEn, and FuzzyEn features can provide useful information about pathological VAG signal irregularity based on different entropy metrics. The statistical parameters of signal envelope amplitude can be used to characterize the temporal fluctuations related to the cartilage pathology. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
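
    A compact NumPy implementation of approximate entropy (ApEn), one of the irregularity measures used above, is sketched below; the embedding dimension and tolerance follow common defaults, and the test signals are synthetic stand-ins for VAG recordings.

      # Approximate entropy: regular signals score low, irregular signals score high.
      import numpy as np

      def approximate_entropy(x, m=2, r_factor=0.2):
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()                               # tolerance
          def phi(m):
              n = len(x) - m + 1
              emb = np.array([x[i:i + m] for i in range(n)])   # embedded vectors
              dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)   # Chebyshev distance
              c = (dist <= r).mean(axis=1)                     # match fraction (self-match included)
              return np.mean(np.log(c))
          return phi(m) - phi(m + 1)

      rng = np.random.default_rng(0)
      regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
      irregular = rng.normal(size=1000)
      print("ApEn(sine)  =", round(approximate_entropy(regular), 3))
      print("ApEn(noise) =", round(approximate_entropy(irregular), 3))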

  18. Probing Low-Mass Vector Bosons with Parity Nonconservation and Nuclear Anapole Moment Measurements in Atoms and Molecules

    NASA Astrophysics Data System (ADS)

    Dzuba, V. A.; Flambaum, V. V.; Stadnik, Y. V.

    2017-12-01

    In the presence of P -violating interactions, the exchange of vector bosons between electrons and nucleons induces parity-nonconserving (PNC) effects in atoms and molecules, while the exchange of vector bosons between nucleons induces anapole moments of nuclei. We perform calculations of such vector-mediated PNC effects in Cs, Ba+ , Yb, Tl, Fr, and Ra+ using the same relativistic many-body approaches as in earlier calculations of standard-model PNC effects, but with the long-range operator of the weak interaction. We calculate nuclear anapole moments due to vector-boson exchange using a simple nuclear model. From measured and predicted (within the standard model) values for the PNC amplitudes in Cs, Yb, and Tl, as well as the nuclear anapole moment of 133Cs, we constrain the P -violating vector-pseudovector nucleon-electron and nucleon-proton interactions mediated by a generic vector boson of arbitrary mass. Our limits improve on existing bounds from other experiments by many orders of magnitude over a very large range of vector-boson masses.

  19. Zn-metalloprotease sequences in extremophiles

    NASA Astrophysics Data System (ADS)

    Holden, T.; Dehipawala, S.; Golebiewska, U.; Cheung, E.; Tremberger, G., Jr.; Williams, E.; Schneider, P.; Gadura, N.; Lieberman, D.; Cheung, T.

    2010-09-01

    The Zn-metalloprotease family contains conserved amino acid structures such that the nucleotide fluctuation at the DNA level would exhibit correlated randomness as described by fractal dimension. A nucleotide sequence fractal dimension can be calculated from a numerical series consisting of the atomic numbers of each nucleotide. The structure's vibration modes can also be studied using a Gaussian Network Model. The vibration measure and fractal dimension values form a two-dimensional plot with a standard vector metric that can be used for comparison of structures. The preference for amino acid usage in extremophiles may suppress nucleotide fluctuations that could be analyzed in terms of fractal dimension and Shannon entropy. A protein-level cold adaptation study of the thermolysin Zn-metalloprotease family using molecular dynamics simulation was reported recently, and our results show that the associated nucleotide fluctuation suppression is consistent with a regression pattern generated from the sequences' fractal dimension and entropy values (R-square = 0.98, N = 5). It was observed that cold adaptation selected for high entropy and low fractal dimension values. Extension to the Archaemetzincin M54 family in extremophiles reveals a similar regression pattern (R-square = 0.98, N = 6). It was observed that the metalloprotease sequences of extremely halophilic organisms possess high fractal dimension and low entropy values as compared with non-halophiles. The zinc atom is usually bonded to the histidine residue, which shows limited levels of vibration in the Gaussian Network Model. The variability of the fractal dimension and entropy for a given protein structure suggests that extremophiles would have evolved after mesophiles, consistent with the biased usage of non-prebiotic amino acids by extremophiles. It may be argued that extremophiles have the capacity to offer extinction protection during drastic changes in astrobiological environments.
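
    An illustrative computation of the two sequence descriptors mentioned above (Shannon entropy of base composition and a Higuchi fractal dimension of the per-base atomic-number series) is sketched below. The base-to-number mapping, here the summed atomic numbers of the free nucleobases, and the random test sequence are assumptions; the cited work may encode nucleotides differently.

      # Shannon entropy of base composition and Higuchi fractal dimension of the numeric series.
      import numpy as np

      ATOMIC_NUMBER = {"A": 70, "G": 78, "C": 58, "T": 66}     # C5H5N5, C5H5N5O, C4H5N3O, C5H6N2O2

      def shannon_entropy(seq):
          counts = np.array([seq.count(b) for b in "ACGT"], dtype=float)
          p = counts / counts.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())                # bits per base

      def higuchi_fd(x, kmax=8):
          x = np.asarray(x, dtype=float)
          N = len(x)
          ks, Lk = range(1, kmax + 1), []
          for k in ks:
              Lm = []
              for m in range(k):
                  n_max = (N - 1 - m) // k
                  if n_max < 1:
                      continue
                  dist = np.abs(np.diff(x[m::k])).sum()        # curve length at scale k, offset m
                  Lm.append(dist * (N - 1) / (n_max * k) / k)
              Lk.append(np.mean(Lm))
          slope, _ = np.polyfit(np.log(1.0 / np.array(list(ks))), np.log(Lk), 1)
          return float(slope)

      seq = "".join(np.random.default_rng(0).choice(list("ACGT"), size=600))
      series = [ATOMIC_NUMBER[b] for b in seq]
      print("Shannon entropy (bits/base):", round(shannon_entropy(seq), 3))
      print("Higuchi fractal dimension:", round(higuchi_fd(series), 3))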

  20. Experimental Evaluation of the High-Speed Motion Vector Measurement by Combining Synthetic Aperture Array Processing with Constrained Least Square Method

    NASA Astrophysics Data System (ADS)

    Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu

    2009-07-01

    Ultrahigh-speed dynamic elastography has promising potential for the clinical diagnosis and therapy of living soft tissues. In order to realize ultrahigh-speed motion tracking at rates of over a thousand frames per second, synthetic aperture (SA) array signal processing must be introduced. Furthermore, the overall system performance must be evaluated quantitatively in terms of the accuracy and variance of the echo phase changes distributed across a tissue medium. To evaluate spatially the local phase changes caused by pulsed excitation of a tissue phantom, the proposed SA system was investigated using different virtual point sources, generated by an array transducer, to probe each component of the local tissue displacement vectors. The final results derived from the cross-correlation method (CCM) showed almost the same performance as those obtained by the constrained least square method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and traveling of the pulsed shear wave, tracked at one thousand frames per second.
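
    A minimal sketch of displacement estimation by the cross-correlation method (CCM) mentioned above: a simulated RF line is shifted by a known number of samples and the shift is recovered in each window from the cross-correlation peak. Synthetic-aperture beamforming and the constrained least-squares variant are not reproduced, and the signal model is an assumption.

      # Windowed cross-correlation recovers a known axial shift between two echo frames.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 4096
      rf_frame1 = np.convolve(rng.normal(size=n), np.hanning(16), mode="same")   # toy RF line
      true_shift = 3                                                             # samples
      rf_frame2 = np.roll(rf_frame1, true_shift)

      window, step = 256, 128
      estimates = []
      for start in range(0, n - window, step):
          a = rf_frame1[start:start + window]
          b = rf_frame2[start:start + window]
          xcorr = np.correlate(b, a, mode="full")              # peak lag = local displacement
          estimates.append(np.argmax(xcorr) - (window - 1))
      print("median estimated shift:", np.median(estimates), "samples (true:", true_shift, ")")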
