Sample records for vector quantization based

  1. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
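
    A minimal sketch of the kind of unsupervised update such algorithms perform: every prototype of the competitive network moves toward each input feature vector, weighted by a fuzzy membership. The inverse-distance membership used here is an illustrative stand-in, not the specific FALVQ membership family from the paper, and all names are ours.

    ```python
    import numpy as np

    def fuzzy_lvq_step(prototypes, x, lr=0.05, eps=1e-9):
        """One unsupervised update: all prototypes move toward the input x,
        each weighted by a fuzzy membership derived from its distance to x.
        The inverse-distance membership below is an illustrative stand-in,
        not the FALVQ membership family from the paper."""
        d = np.sum((prototypes - x) ** 2, axis=1)          # squared distances
        m = 1.0 / (d + eps)
        m /= m.sum()                                       # memberships sum to 1
        prototypes += lr * m[:, None] * (x - prototypes)   # update all prototypes
        return prototypes

    # toy usage: 4 prototypes quantizing 2-D feature vectors
    # (e.g., local values of two relaxation parameters)
    rng = np.random.default_rng(0)
    protos = rng.normal(size=(4, 2))
    for x in rng.normal(size=(100, 2)):
        protos = fuzzy_lvq_step(protos, x)
    ```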

  2. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, which uses a probability transition matrix to select the best subcodebook for encoding the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 that review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in the conclusion.
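
    A minimal sketch of the basic VQ pipeline this abstract describes: codebook training by an iterative clustering algorithm, nearest-codeword encoding, and table-lookup decoding. Extraction of image blocks into vectors is assumed to have been done already, and all function names are ours.

    ```python
    import numpy as np

    def train_codebook(vectors, n_codewords=16, iters=20, seed=0):
        """Generalized Lloyd / K-means codebook training (illustrative).
        Requires len(vectors) >= n_codewords."""
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)]
        for _ in range(iters):
            # nearest-codeword assignment for every training vector
            idx = np.argmin(((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
            # centroid update; empty cells keep their old codeword
            for j in range(n_codewords):
                members = vectors[idx == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
        return codebook

    def vq_encode(vectors, codebook):
        """Each vector is replaced by the index of its best-matching codeword."""
        return np.argmin(((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)

    def vq_decode(indices, codebook):
        """Table lookup: the index addresses the table of representative vectors."""
        return codebook[indices]
    ```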

  3. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  4. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
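
    The two records above describe the same paper. A sketch of its variable-block-size partitioning step follows, assuming a square, power-of-two subband; local variance is used as an illustrative stand-in for the paper's local fractal dimension (LFD) complexity measure, which is not reproduced here.

    ```python
    import numpy as np

    def quadtree_partition(band, top=0, left=0, size=None, min_size=4, thresh=50.0):
        """Recursively split a square wavelet subband into variable-size blocks.
        Local variance stands in for the paper's LFD complexity measure:
        complex regions are split into smaller blocks, smooth regions keep
        larger ones."""
        if size is None:
            size = band.shape[0]          # assume a square, power-of-two subband
        block = band[top:top + size, left:left + size]
        if size <= min_size or block.var() < thresh:
            return [(top, left, size)]    # leaf: one block of this size
        half = size // 2
        leaves = []
        for dt, dl in ((0, 0), (0, half), (half, 0), (half, half)):
            leaves += quadtree_partition(band, top + dt, left + dl, half, min_size, thresh)
        return leaves

    # each leaf block would then be vector-quantized with the codebook trained
    # for its size class (per the abstract's modified K-means training stage)
    ```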

  5. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  6. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reduction of data storage and computational effort. One of the most recent VQ techniques that handles the poor estimation of vector centroids due to biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). Therefore, in this paper, we are motivated to propose a justification of an FDVQ-based hidden Markov model (HMM) by investigating its effectiveness and efficiency in classification of genotype-image phenotypes. The performance evaluation and comparison of the recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) are carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are almost similar. Finally, we justify the competitiveness of FDVQ-HMM in classification of a cellular phenotype image database by using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.

  7. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness against channel bit errors than methods that use variable-length codes.
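
    A compact sketch of the core idea of Frequency-Sensitive Competitive Learning: a win-count "fairness" weight on the distance, so that frequently winning codewords become harder to win with and the whole codebook gets used. Details such as the learning-rate schedule are simplified, and the function name is ours.

    ```python
    import numpy as np

    def fscl_train(vectors, n_codewords=64, lr=0.05, seed=0):
        """Frequency-sensitive competitive learning (illustrative sketch).
        Each codeword's distance to the input is scaled by its win count,
        so all codewords end up used -- the property that makes fixed-length
        indices competitive with variable-length (Huffman) codes downstream."""
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)].copy()
        wins = np.ones(n_codewords)
        for x in vectors:
            d = np.sum((codebook - x) ** 2, axis=1)
            j = np.argmin(wins * d)                # fairness-weighted winner
            codebook[j] += lr * (x - codebook[j])  # move winner toward input
            wins[j] += 1
        return codebook
    ```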

  8. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.

  9. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbit/s. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  10. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations for the genetic algorithm process are created by selecting random code vectors from the training set for the codebooks, and IP-HMM carries out the recognition. The novelty at this stage lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  11. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  12. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand of resources (system memory); results on image compression and quality are presented.

  13. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs codebooks simultaneously as the data are being encoded or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. When computed over a fixed set of vectors, the recursive centroid is identical to the batch centroid calculation. This method of centroid calculation can easily be combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes definition, or state, after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
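
    The recursive moving-average centroid update described above fits in a few lines; the sketch below uses our own names and omits the side-information bookkeeping that keeps the decoder's codebook synchronized.

    ```python
    import numpy as np

    def recursive_centroid(centroid, count, x):
        """Moving-average update: after the encoder selects this centroid as
        the minimum-distance match for vector x, nudge it toward x. Applied
        over a fixed set of vectors, the recursion reproduces the batch
        centroid exactly: c_n = c_{n-1} + (x_n - c_{n-1}) / n."""
        count += 1
        return centroid + (x - centroid) / count, count

    # toy check: the recursion matches the batch mean
    xs = np.random.default_rng(1).normal(size=(50, 3))
    c, n = xs[0].copy(), 1
    for x in xs[1:]:
        c, n = recursive_centroid(c, n, x)
    assert np.allclose(c, xs.mean(axis=0))
    ```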

  14. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.

  15. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061

  16. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    PubMed

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
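
    One classic example of the "efficient nearest neighbor search techniques" mentioned in the two records above is the partial distance search; whether the paper uses exactly this technique is not stated, so the sketch below is illustrative only.

    ```python
    import numpy as np

    def pds_nearest(x, codebook):
        """Partial distance search: accumulate the squared distance one
        dimension at a time and abandon a codeword as soon as its partial
        sum exceeds the best full distance found so far."""
        best_j, best_d = 0, np.inf
        for j, c in enumerate(codebook):
            d = 0.0
            for xi, ci in zip(x, c):
                d += (xi - ci) ** 2
                if d >= best_d:      # early abandon: cannot beat current best
                    break
            else:                    # loop ran to completion: new best match
                best_j, best_d = j, d
        return best_j, best_d
    ```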

  17. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
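
    The standard Lagrangian formulation of entropy-constrained quantizer design, consistent with (though not copied from) the paper, replaces the plain nearest-neighbor encoding rule with a rate-biased one:

    ```latex
    % minimize distortion subject to an entropy constraint via a Lagrangian;
    % \ell_i is the code length of index i under the entropy coder
    \min \; D + \lambda H
    \quad\Longrightarrow\quad
    i^*(x) = \arg\min_i \big[\, d(x, c_i) + \lambda\,\ell_i \,\big],
    \qquad \ell_i = -\log_2 p_i .
    ```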

  18. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  19. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  20. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
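
    A generic sketch of a single-threshold online quantization that keeps a sample only when it is far from every representative kept so far, so dense regions collapse onto few representatives. This conveys the flavor of DQS but is not the paper's exact scheme; the function name and threshold semantics are ours.

    ```python
    import numpy as np

    def density_quantize(X, threshold):
        """Single-threshold online quantization (illustrative, not the exact
        DQS): a sample becomes a new representative only if it lies farther
        than `threshold` from all representatives kept so far."""
        reps = [X[0]]
        for x in X[1:]:
            d = np.min(np.sum((np.asarray(reps) - x) ** 2, axis=1))
            if d > threshold ** 2:
                reps.append(x)
        return np.asarray(reps)

    # the reduced subset would then feed the Nystrom feature approximation
    # used by the LS-SVM stage described in the abstract
    ```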

  1. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.

  2. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
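
    A hedged sketch of the index-assignment idea: estimate the expected distortion caused by single-bit index errors on a binary symmetric channel, then greedily swap pairs of binary words while the estimate improves. This is an illustrative stand-in for, not a reproduction of, the paper's two algorithms; it assumes a power-of-two codebook size and equiprobable codewords.

    ```python
    import numpy as np
    from itertools import combinations

    def channel_distortion(codebook, assign, p_flip):
        """Expected extra distortion from single-bit index errors on a binary
        symmetric channel (first-order approximation, equiprobable codewords).
        assign[i] is the binary word transmitted for codeword i; a flipped bit
        makes the decoder output the codeword owning the corrupted word."""
        n = len(codebook)
        bits = int(np.log2(n))              # assumes power-of-two codebook size
        d = 0.0
        for i in range(n):
            for b in range(bits):
                j = assign.index(assign[i] ^ (1 << b))
                d += p_flip * np.sum((codebook[i] - codebook[j]) ** 2)
        return d / n

    def binary_switching(codebook, p_flip=0.01):
        """Greedily swap pairs of binary words while expected channel
        distortion improves (illustrative stand-in for the paper's methods)."""
        assign = list(range(len(codebook)))
        best = channel_distortion(codebook, assign, p_flip)
        improved = True
        while improved:
            improved = False
            for i, j in combinations(range(len(codebook)), 2):
                assign[i], assign[j] = assign[j], assign[i]
                d = channel_distortion(codebook, assign, p_flip)
                if d < best:
                    best, improved = d, True
                else:
                    assign[i], assign[j] = assign[j], assign[i]  # undo swap
        return assign, best
    ```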

  3. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the codebook, the distortion measures used in the design, and the finite training procedure involved in the construction of the codebook. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.

  4. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
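
    A heavily hedged sketch of codebook-free binarization in the spirit of BSIFT: threshold each descriptor dimension against the descriptor's own median to get a bit-vector, and take the first 32 bits as an inverted-file code word, as the abstract indicates. The published scheme may differ in detail; this only conveys the idea of quantization without a trained codebook.

    ```python
    import numpy as np

    def binary_sift(descriptor):
        """Data-independent binarization (illustrative): compare each of the
        128 SIFT dimensions to the descriptor's own median, yielding a
        bit-vector with no trained codebook. The first 32 bits serve as an
        inverted-file code word, per the abstract."""
        bits = descriptor > np.median(descriptor)
        word = int(np.packbits(bits[:32].astype(np.uint8)).view('>u4')[0])
        return bits, word

    # toy usage on a random 128-D descriptor
    bits, word = binary_sift(np.random.default_rng(0).random(128))
    ```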

  5. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.

  6. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria consistent with psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimulus. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. The paper is organized in six sections. The first briefly highlights the importance of having newer and better compression algorithms; it also explains the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision in response to ocular stimulus. The second section gives a quick review of vector quantization techniques, focusing on their performance in image treatment, as a preview of the image vector quantizer compressor constructed in the fifth section. The third section gathers the most important data on brightness models; the building of these so-called brightness maps (quantification of the human perception of visible object reflectance) in a bi-dimensional model is addressed here. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As we have learned from previous work [1], the Hermite transform has been shown to be a useful and practical tool to efficiently code the energy within an image block, deciding which kind of quantization (scalar or vector) is to be used upon it; it is also a unique tool to structurally classify the image block within a given lattice. This particular operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the three preceding topics into an image compression model that takes advantage of vector quantizers inside the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows results obtained while testing the coding-decoding model; the criteria used to evaluate compression performance were the compression ratio, SNR, and psycho-visual quality. Conclusions derived from the research, and possible unexplored paths, are given in this section as well.

  7. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse coding modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ~9 dB and greatly reduces the EVM, given the same number of quantization bits.

  8. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  9. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  10. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  11. Distance learning in discriminative vector quantization.

    PubMed

    Schneider, Petra; Biehl, Michael; Hammer, Barbara

    2009-10-01

    Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.

  12. Automatic detection of voice impairments by means of short-term cepstral parameters and neural network based detectors.

    PubMed

    Godino-Llorente, J I; Gómez-Vilda, P

    2004-02-01

    It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool to diagnose voice diseases, being a complementary technique to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed using short-term vectors calculated according to the well-known Mel Frequency Cepstral Coefficient (MFCC) parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology demonstrated to be more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.

  13. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M in a Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.

  14. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.

  15. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.

  16. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.

  17. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…

  18. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.

  19. Detecting double compressed MPEG videos with the same quantization matrix and synchronized group of pictures structure

    NASA Astrophysics Data System (ADS)

    Aghamaleki, Javad Abbasi; Behrad, Alireza

    2018-01-01

    Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used to reduce dimensionality and enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.

  20. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    NASA Astrophysics Data System (ADS)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerating simulation for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.

  1. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements result from representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, achieving very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.

  2. A fingerprint key binding algorithm based on vector quantization and error correction

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g., fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. In order to cope with the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template, which is then bound with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.

  3. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.

  4. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
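
    QKLMS as described admits a compact sketch: each new input either updates the coefficient of its closest dictionary center (if within the quantization radius) or is appended as a new center, so the radial basis function structure grows only where the input space is not yet covered. Parameter names below are ours.

    ```python
    import numpy as np

    def qklms(inputs, targets, eta=0.5, eps_u=0.1, sigma=1.0):
        """Quantized kernel LMS (illustrative sketch with a Gaussian kernel).
        If a new input lies within the quantization radius eps_u of an
        existing center, the 'redundant' datum updates that center's
        coefficient; otherwise it becomes a new center."""
        gauss = lambda a, b: np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))
        centers, coeffs = [inputs[0]], [eta * targets[0]]
        for u, d in zip(inputs[1:], targets[1:]):
            y = sum(a * gauss(u, c) for a, c in zip(coeffs, centers))  # predict
            e = d - y                                                  # error
            dists = [np.sum((u - c) ** 2) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] <= eps_u ** 2:
                coeffs[j] += eta * e          # quantize: reuse closest center
            else:
                centers.append(u)             # grow the network
                coeffs.append(eta * e)
        return centers, coeffs
    ```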

  5. Quantized Vector Potential and the Photon Wave-function

    NASA Astrophysics Data System (ADS)

    Meis, C.; Dahoo, P. R.

    2017-12-01

    The vector potential function $\vec{\alpha}_{k\lambda}(\vec{r},t)$ for a k-mode and λ-polarization photon, with the quantized amplitude $\alpha_{0k}(\omega_k) = \xi\omega_k$, satisfies the classical wave propagation equation as well as Schrodinger's equation with the relativistic massless Hamiltonian $\tilde{H} = -i\hbar c\,\vec{\nabla}$ …

  6. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
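
    A minimal sketch of the forward-adaptive variant: estimate a gain (here simply the RMS of the input vector, one plausible estimator among the several the paper compares), normalize, quantize the shape, and rescale at the decoder. Scalar quantization of the transmitted gain, and the paper's gain-normalized codebook design technique, are omitted.

    ```python
    import numpy as np

    def gain_adaptive_encode(x, codebook):
        """Forward gain adaptation (illustrative): normalize the input by an
        RMS gain estimate, then VQ the reduced-dynamic-range 'shape' with a
        codebook assumed to be trained on gain-normalized vectors."""
        gain = np.sqrt(np.mean(x ** 2)) + 1e-12
        shape = x / gain
        idx = int(np.argmin(np.sum((codebook - shape) ** 2, axis=1)))
        return idx, gain   # in practice the gain would be scalar-quantized too

    def gain_adaptive_decode(idx, gain, codebook):
        """Decoder multiplies the codevector by the transmitted gain."""
        return gain * codebook[idx]
    ```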

  7. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  8. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, of dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates, at video rates, the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board, permitting real-time experimentation with very large codebooks.
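
    The O(log N) behavior mentioned above comes from a tree-structured codebook search: at each level the input is compared against two test vectors and the search descends toward the closer one. A dict-based sketch follows; the node layout is a hypothetical representation chosen for illustration.

        import numpy as np

        def btsvq_search(x, node):
            # Binary tree-searched VQ: descend toward the child whose test
            # vector is closer to x; a leaf stores the codeword index.
            # O(log N) distance computations versus O(N) for full search.
            while 'index' not in node:
                dl = np.sum((x - node['left']['vector']) ** 2)
                dr = np.sum((x - node['right']['vector']) ** 2)
                node = node['left'] if dl <= dr else node['right']
            return node['index']

        # Hypothetical two-level tree over four codewords v0..v3, with
        # internal test vectors v01 and v23:
        # tree = {'left':  {'vector': v01,
        #                   'left':  {'vector': v0, 'index': 0},
        #                   'right': {'vector': v1, 'index': 1}},
        #         'right': {'vector': v23,
        #                   'left':  {'vector': v2, 'index': 2},
        #                   'right': {'vector': v3, 'index': 3}}}

    Tree search trades a small loss in match quality for an exponential reduction in comparisons, which is what makes very large codebooks searchable at video rates.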

  9. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g., only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.

  10. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  11. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images with directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. The method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  12. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition leads to stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a rocket fairing structural-acoustic model. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to be searched locally. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality, and is therefore an improvement over state-of-the-art approaches for object retrieval.
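
    The indexing scheme can be sketched as follows: each descriptor is quantized to a bit-vector, which then serves directly as a key into an inverted file mapping to the boxes containing it. Thresholding each dimension at the descriptor mean is an illustrative binarization rule only, not the paper's actual quantizer.

        import numpy as np
        from collections import defaultdict

        def binary_quantize(desc):
            # Quantize a SIFT-like descriptor into a bit-vector by
            # thresholding each dimension at the descriptor mean
            # (illustrative rule), then pack it into a hashable key.
            bits = (desc > desc.mean()).astype(np.uint8)
            return bits.tobytes()

        inverted_file = defaultdict(list)  # bit-vector -> list of box IDs

        def index_feature(desc, box_id):
            inverted_file[binary_quantize(desc)].append(box_id)

        def query_feature(desc):
            return inverted_file.get(binary_quantize(desc), [])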

  14. Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

    We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure analogous to the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results agree with those found in the literature using the Dirac method.

  15. VLSI realization of learning vector quantization with hardware/software co-design for different applications

    NASA Astrophysics Data System (ADS)

    An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

    2015-04-01

    This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and is designed as an SoC in 180 nm CMOS. The time-consuming nearest-Euclidean-distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the number of neurons in the competition layer, the weight values, and the numbers of inputs and outputs are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different dimensionalities of the FVs.
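
    As a worked example of the cycle count (with illustrative numbers, not figures from the paper): for n = 128 reference FVs of dimension d = 64, parallel input width p = 4, and pipeline depth R = 10, classification takes 128 × ⌈64/4⌉ + 10 = 128 × 16 + 10 = 2058 clock cycles.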

  16. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    NASA Astrophysics Data System (ADS)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which comprises a driving-pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving-pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
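
    The prototype update at the heart of such an LVQ recognizer is simple enough to sketch directly; the following is a minimal LVQ1 variant (the learning rate eta and the epoch count are illustrative assumptions, and the extraction of features from driving information is not shown).

        import numpy as np

        def lvq1_train(X, y, prototypes, proto_labels, eta=0.05, epochs=20):
            # Classic LVQ1: pull the winning prototype toward a correctly
            # classified sample and push it away from a misclassified one.
            P = np.asarray(prototypes, dtype=float).copy()
            for _ in range(epochs):
                for x, label in zip(X, y):
                    j = int(np.argmin(np.sum((P - x) ** 2, axis=1)))
                    sign = 1.0 if proto_labels[j] == label else -1.0
                    P[j] += sign * eta * (x - P[j])
            return P

        def lvq_classify(x, prototypes, proto_labels):
            j = int(np.argmin(np.sum((prototypes - x) ** 2, axis=1)))
            return proto_labels[j]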

  17. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding is presented. The method allows simultaneous projection of the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and with the hierarchical approach of combining a vector quantizer, such as the self-organizing feature map (SOM) or NG, with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  18. Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms

    DTIC Science & Technology

    1995-07-01

    [Fragmentary abstract; recoverable text:] …would be in the spirit of the Lempel-Ziv algorithm: the decoder would have to be aware of changes in the dictionary, depending on the nature of the data… Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in R^N, via a general vector compression algorithm based on frames.

  19. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  20. Radial quantization of the 3d CFT and the higher spin/vector model duality

    NASA Astrophysics Data System (ADS)

    Hu, Shan; Li, Tianjun

    2014-10-01

    We study the radial quantization of the 3d O(N) vector model. We calculate the higher spin charges whose commutation relations give the higher spin algebra. The Fock states of higher spin gravity in AdS4 are realized as states in the 3d CFT. The dynamical information is encoded in their inner products. This serves as the simplest explicit demonstration of the CFT definition of quantum gravity.

  1. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers, in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. A comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  2. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques, coupled with improved codebook design methodologies, are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to varying input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by using significant structure in the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.

  3. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  4. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  5. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to reach a much lower bit rate for the LASIS interference pattern, whose special optical characteristics arise from the pushing and sweeping of the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked for whether it has the same mean value as the current processing block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.

  6. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice, and that many dimensions in the FV/VLAD are noise. Throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
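
    The select-then-binarize pipeline can be sketched as below for the unsupervised case; ranking dimensions by variance is one plausible importance score used here for illustration, not necessarily the paper's sorting criterion.

        import numpy as np

        def select_and_binarize(X, k):
            # Rank dimensions by an importance score (variance here),
            # keep the top k, then 1-bit quantize each kept dimension by
            # comparing against its mean over the data set.
            importance = X.var(axis=0)
            keep = np.argsort(importance)[::-1][:k]
            Xs = X[:, keep]
            bits = (Xs > Xs.mean(axis=0)).astype(np.uint8)
            return bits, keep  # keep must be stored to encode queries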

  7. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks; it also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with FSVQ and lattice VQ, a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are done from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.

  8. Associative Pattern Recognition In Analog VLSI Circuits

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1995-01-01

    Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips built and tested to demonstrate concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary oxide/semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. Chips exploit parallel nature of vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, cells classify input pattern in fraction of microsecond while consuming power of few microwatts.

  9. Physics-based Detection of Subpixel Targets in Hyperspectral Imagery

    DTIC Science & Technology

    2007-01-01

    [Fragmentary abstract; recoverable text:] …Long-Wave Infrared (LWIR), from 7.0 to 15.0 microns, as well. At these wavelengths, emissivity dominates the spectral signature; emissivity characterizes what an object emits instead of reflects. Initial work has already been finished applying the hybrid detectors to LWIR sensors [13]…

  10. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.

  11. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we have shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.

  12. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.

  13. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general-case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities compared with conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. With the performance of SIVQ observed thus far, we find evidence that there indeed exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into the clinical workflow as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without requiring the immediate availability of an image analysis expert.
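
    The rotational advantage of ring vectors can be illustrated with a small sketch: sample gray values on a circle and score a match under all cyclic rotations, which makes the comparison orientation-invariant in a way square vectors cannot be. The sampling and scoring details below are simplifying assumptions, not the published SIVQ implementation.

        import numpy as np

        def sample_ring(image, cy, cx, radius, n_points=64):
            # Sample gray values on a circle around (cy, cx): a "ring vector".
            t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
            ys = np.clip(np.round(cy + radius * np.sin(t)).astype(int),
                         0, image.shape[0] - 1)
            xs = np.clip(np.round(cx + radius * np.cos(t)).astype(int),
                         0, image.shape[1] - 1)
            return image[ys, xs].astype(float)

        def ring_match_score(ring, template_ring):
            # Best mean squared error over all cyclic rotations of the
            # template; the ring's symmetry buys orientation invariance.
            return min(np.mean((np.roll(template_ring, k) - ring) ** 2)
                       for k in range(len(ring)))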

  14. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand global methods as those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a vector quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, the Support Vector Machine (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method shows quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
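
    The LBG step that forms the local cells is essentially the generalized Lloyd iteration, sketched below (random codebook initialization is an illustrative choice; LBG is often seeded by codebook splitting instead).

        import numpy as np

        def lbg(X, n_codewords, iters=50, seed=0):
            # Linde-Buzo-Gray / generalized Lloyd: alternate nearest-codeword
            # assignment and centroid update.
            rng = np.random.default_rng(seed)
            codebook = X[rng.choice(len(X), n_codewords, replace=False)].copy()
            assign = np.zeros(len(X), dtype=int)
            for _ in range(iters):
                d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                assign = d.argmin(axis=1)
                for j in range(n_codewords):
                    members = X[assign == j]
                    if len(members):
                        codebook[j] = members.mean(axis=0)
            return codebook, assign  # cells = points sharing an assignment

    Each cell then gets its own local decision rule, so the hard global problem is replaced by an assemblage of easier local ones.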

  15. Learning vector quantization neural networks improve accuracy of transcranial color-coded duplex sonography in detection of middle cerebral artery spasm--preliminary report.

    PubMed

    Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S; Mariak, Zenon; Melhem, Elias R; Krejza, Jaroslaw

    2008-01-01

    To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25-50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm.

  16. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.

  17. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper, Fock-space representations (contragredient Verma modules) of the quantized enveloping algebras that are natural from the geometrical point of view are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (regarded as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  18. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    [Fragmentary abstract; recoverable text:] Keywords: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, 2D and 3D… vector quantization of wavelet transform coefficients… adaptive image halftoning based on wavelets… One application has been directed to adaptive image halftoning: the gray information at a pixel, including its gray value and gradient, is represented by…

  19. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.

  20. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  1. Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins

    USGS Publications Warehouse

    Heiweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.

    2006-01-01

    The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.

  2. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.

  3. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed, for both full-searched and tree-searched variants. For tree-searched VQ, the special case of a Binary Tree-Searched VQ (BTSVQ) is disclosed, with identical Processing Elements (PEs) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with the codebook memory assignment shifted one PE past the faulty PE of the array.

  4. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored to specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).

  5. Application of Classification Models to Pharyngeal High-Resolution Manometry

    ERIC Educational Resources Information Center

    Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.

    2012-01-01

    Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…

  6. Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.

    2010-10-01

    The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigating shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to be ranging from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
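
    For reference, the baseline MCC retrieval that this method refines can be sketched as follows: the displacement is the integer pixel offset that maximizes the normalized cross-correlation between a block of the first image and a window of the second. The integer-offset restriction is precisely the source of the quantization noise discussed above; the continuous optimization step that removes it is not reproduced here.

        import numpy as np

        def mcc_displacement(block, search_area):
            # Exhaustive normalized cross-correlation over integer offsets;
            # the winning offset is the (quantized) displacement estimate.
            bh, bw = block.shape
            b = (block - block.mean()) / (block.std() + 1e-12)
            best, best_off = -np.inf, (0, 0)
            for i in range(search_area.shape[0] - bh + 1):
                for j in range(search_area.shape[1] - bw + 1):
                    w = search_area[i:i + bh, j:j + bw]
                    wn = (w - w.mean()) / (w.std() + 1e-12)
                    c = float(np.mean(b * wn))
                    if c > best:
                        best, best_off = c, (i, j)
            return best_off, best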

  7. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    Wavelet analysis, as a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.

  8. Permutation modulation for quantization and information reconciliation in CV-QKD systems

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    2017-08-01

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the necessary coding efficiency to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics are used extensively throughout the development, from generation of the seed vector in PM to analysis of the error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
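
    The PM quantization step itself can be sketched as follows: since every codeword is a permutation of a single seed vector, the nearest codeword to a sample is found by matching order statistics, i.e., placing the k-th smallest seed entry at the position of the k-th smallest sample entry (a consequence of the rearrangement inequality). The seed design and the mapping of the permutation to bits are not shown.

        import numpy as np

        def pm_quantize(x, seed):
            # Permutation modulation: the codebook consists of all distinct
            # permutations of one seed vector; the nearest codeword matches
            # the order statistics of the sample.
            order = np.argsort(x)  # positions of x in ascending order
            codeword = np.empty(len(order), dtype=float)
            codeword[order] = np.sort(seed)
            return codeword, order  # order identifies the permutation

    Fractional bit rates per sample arise naturally because the number of distinct permutations of the seed need not be a power of two.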

  9. Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors

    DTIC Science & Technology

    2011-04-15

    [Fragmentary abstract; recoverable text:] …A 1-bit quantizer is reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization… To test the hypothesis that this term is the key…

  10. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of this lower bound for each case. Simulation examples are also presented to illustrate the established results.

  11. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  12. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  13. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal micro-laser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectra information. Moreover, it is fast and convenient, does not require spectral peak assignment, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.

  14. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype-based learning offers an intuitive interface for inspecting large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix; thus, they are infeasible already for medium-sized data sets. The contribution of this article is twofold: on the one hand, we propose a novel supervised prototype-based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand, we transfer a linear-time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
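
    The Nyström approximation that yields the linear-time variants can be sketched for a symmetric positive semidefinite similarity matrix as below; the relational methods apply analogous algebra to the dissimilarity matrix, and the choice of the m landmark points is assumed given.

        import numpy as np

        def nystroem_factor(K_mn, K_mm, eps=1e-12):
            # Given the m-by-n block K_mn (landmarks vs. all points) and the
            # m-by-m landmark block K_mm, the full n-by-n matrix is
            # approximated as K ~ K_mn.T @ pinv(K_mm) @ K_mn, i.e. F @ F.T
            # with the n-by-m factor F below, at O(n m^2) instead of O(n^2).
            U, s, _ = np.linalg.svd(K_mm)
            return K_mn.T @ U @ np.diag(1.0 / np.sqrt(np.maximum(s, eps)))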

  15. Reduced Order Podolsky Model

    NASA Astrophysics Data System (ADS)

    Thibes, Ronaldo

    2017-02-01

    We perform the canonical and path integral quantizations of a lower-derivative-order model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at the classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, the Dirac brackets, and the effective action.

  16. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure- and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). The paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the developed algorithms is evaluated in a wavelet-based image coding system.

  17. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that has only recently begun to be addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) on the order of 10:1 to 20:1 with high visual quality have been achieved by vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane, or 0.16 bpp monochrome; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage yields extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.

  18. Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.

    2008-12-05

    We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.

  19. Research on conceptual/innovative design for the life cycle

    NASA Technical Reports Server (NTRS)

    Cagan, Jonathan; Agogino, Alice M.

    1990-01-01

    The goal of this research is to develop and integrate qualitative and quantitative methods for life cycle design. The problem is framed by three observations: formal computer-based methods are limited to the final detailing stages of design; CAD databases do not capture design intent or design history; and life cycle issues are ignored during the early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; a multistart vector quantization optimization algorithm; intelligent manufacturing (IDES, an Influence Diagram Architecture); and 1st PRINCE (FIRST PRINciple Computational Evaluator).

  20. Coherent states for the relativistic harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Aldaya, Victor; Guerrero, J.

    1995-01-01

    Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the relativistic harmonic oscillator, as well as a generalized Bargmann transform relating Fock wave functions to a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations of the form [ẑ, ẑ†] ≈ Ê (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying [â, â†] = 1, the eigenstates of which are 'true' coherent states.

  1. On the Problem of Bandwidth Partitioning in FDD Block-Fading Single-User MISO/SIMO Systems

    NASA Astrophysics Data System (ADS)

    Ivrlač, Michel T.; Nossek, Josef A.

    2008-12-01

    We report on our research activity on the problem of how to optimally partition the available bandwidth of frequency division duplex, multi-input single-output communication systems, into subbands for the uplink, the downlink, and the feedback. In the downlink, the transmitter applies coherent beamforming based on quantized channel information which is obtained by feedback from the receiver. As feedback takes away resources from the uplink, which could otherwise be used to transfer payload data, it is highly desirable to reserve the "right" amount of uplink resources for the feedback. Under the assumption of random vector quantization, and a frequency flat, independent and identically distributed block-fading channel, we derive closed-form expressions for both the feedback quantization and bandwidth partitioning which jointly maximize the sum of the average payload data rates of the downlink and the uplink. While we do introduce some approximations to facilitate mathematical tractability, the analytical solution is asymptotically exact as the number of antennas approaches infinity, while for systems with few antennas, it turns out to be a fairly accurate approximation. In this way, the obtained results are meaningful for practical communication systems, which usually can only employ a few antennas.
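    A minimal sketch of the random vector quantization assumption may clarify the feedback step: the receiver quantizes the channel direction to the best of 2^B random unit-norm codewords and feeds back only the index. The codebook size, antenna count, and channel draw below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvq_codebook(num_bits, num_antennas):
    """2**num_bits random unit-norm complex codewords."""
    C = (rng.standard_normal((2**num_bits, num_antennas))
         + 1j * rng.standard_normal((2**num_bits, num_antennas)))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def quantize_channel(h, codebook):
    """Feed back the index of the codeword best aligned with h."""
    idx = int(np.argmax(np.abs(codebook.conj() @ h)))
    return idx, codebook[idx]

h = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # i.i.d. block-fading draw
idx, w = quantize_channel(h, rvq_codebook(8, 4))          # B = 8 feedback bits
```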

  2. Radiation and matter: Electrodynamics postulates and Lorenz gauge

    NASA Astrophysics Data System (ADS)

    Bobrov, V. B.; Trigger, S. A.; van Heijst, G. J.; Schram, P. P.

    2016-11-01

    In general terms, we consider matter as a system of charged particles plus the quantized electromagnetic field. For a consistent description of the thermodynamic properties of matter, especially in extreme states, the problem of quantization of the longitudinal and scalar potentials should be solved. In this connection, we point out that the traditional postulate of electrodynamics, which claims that only electric and magnetic fields are observable, can be maintained only by denying the validity of the Maxwell equations for microscopic fields. The Maxwell equations, as a generalization of experimental data, are valid only for averaged values. We show that microscopic electrodynamics may instead be based on postulating the d'Alembert equations for the four-vector of the electromagnetic field potential. The Lorenz gauge is valid for the averaged potentials (and provides the validity of the Maxwell equations for the averages). The suggested concept overcomes difficulties in the electromagnetic field quantization procedure and is in accordance with the results of quantum electrodynamics. As a result, longitudinal and scalar photons become real rather than virtual and may in principle be observed. The longitudinal and scalar photons provide not only the Coulomb interaction of charged particles, but also allow for the electrical Aharonov-Bohm effect.

  3. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented, along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  4. Vector/Matrix Quantization for Narrow-Bandwidth Digital Speech Compression.

    DTIC Science & Technology

    1982-09-01


  5. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion-rate function. The authors introduce a modified version of the original algorithm which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion-rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.
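    The convex-hull criterion underlying such pruning is compact enough to sketch: repeatedly collapse the internal node that buys the least distortion reduction per bit of rate. The node fields below are assumptions for illustration; the paper's modification further restricts which nodes may be collapsed at each step.

```python
class Node:
    def __init__(self, distortion, rate, children=()):
        self.d, self.r = distortion, rate   # cost if this node is used as a leaf
        self.children = list(children)

def subtree_cost(n):
    """Total (distortion, rate) of the subtree rooted at n."""
    if not n.children:
        return n.d, n.r
    ds, rs = zip(*(subtree_cost(c) for c in n.children))
    return sum(ds), sum(rs)

def prune_once(root):
    """Collapse the internal node with minimal lambda = dD / dR, i.e.
    the smallest distortion increase per bit of rate saved."""
    best, best_lam = None, float("inf")
    stack = [root]
    while stack:
        n = stack.pop()
        if n.children:
            d_sub, r_sub = subtree_cost(n)
            lam = (n.d - d_sub) / max(r_sub - n.r, 1e-12)
            if lam < best_lam:
                best, best_lam = n, lam
            stack.extend(n.children)
    if best is not None:
        best.children = []   # the node becomes a leaf codeword
    return best_lam
```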

  6. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    VanGelder, Allen; Kim, Kwansik

    1996-01-01

    A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration and incorporates directional lighting is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.

  7. Passive forensics for copy-move image forgery using a method based on DCT and SVD.

    PubMed

    Zhao, Jie; Guo, Jichang

    2013-12-10

    As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness against common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. First, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Second, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted, reducing the dimension of each block by using its largest singular values. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift-frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression, or their mixed operations.
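    A simplified sketch of the matching pipeline (block DCT, coarse quantization, lexicographic sorting, shift-vector voting) is given below; the block size, quantization step, vote threshold, and the use of only low-frequency coefficients are illustrative, and the SVD dimension-reduction stage is omitted for brevity.

```python
import numpy as np
from scipy.fft import dctn

def copy_move_candidates(img, B=8, q=16, vote_thresh=50):
    """img: 2D grayscale array. Returns shift vectors on which many
    block pairs with identical quantized DCT features agree."""
    H, W = img.shape
    feats = []
    for y in range(H - B + 1):
        for x in range(W - B + 1):
            c = dctn(img[y:y + B, x:x + B], norm="ortho")
            f = tuple(np.round(c[:4, :4].ravel() / q))  # coarse low-freq feature
            feats.append((f, y, x))
    feats.sort()                                        # lexicographic sort
    votes = {}
    for (f1, y1, x1), (f2, y2, x2) in zip(feats, feats[1:]):
        if f1 == f2:                                    # neighbouring rows match
            s = (y2 - y1, x2 - x1)
            votes[s] = votes.get(s, 0) + 1
    return {s: n for s, n in votes.items() if n >= vote_thresh}
```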

  8. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises whether the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geom. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introducing a connection on H, and using parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path-independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat, which means that in those cases quantization is unique.

  9. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure outperforms image restoration-based techniques and closely matches the performance of HMGMM on clean images in terms of both visual segmentation results and error rate.

  10. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of vector quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and human visual system (HVS) models. The error model assumed is the Laplacian distribution with mean λ, computed from a sample of the input image. A Laplacian distribution with mean λ is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering their DCT coefficients. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
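    Because encoder and decoder must regenerate an identical codebook from λ alone, a seeded generator is essential. The sketch below illustrates the construction; the weight matrix is a stand-in for the HVS-optimal one used in the paper, and the block and codebook sizes are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def mvq_codebook(lam, num_codes=256, block=4, seed=42):
    """Draw Laplacian residual blocks from a fixed-seed generator, then
    shape them perceptually by weighting their DCT coefficients."""
    rng = np.random.default_rng(seed)       # same seed at encoder and decoder
    vecs = rng.laplace(loc=0.0, scale=lam, size=(num_codes, block, block))
    u, v = np.meshgrid(np.arange(block), np.arange(block))
    W = 1.0 / (1.0 + u + v)                 # illustrative low-pass HVS weighting
    shaped = idctn(dctn(vecs, axes=(1, 2), norm="ortho") * W,
                   axes=(1, 2), norm="ortho")
    return shaped.reshape(num_codes, -1)
```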

  11. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates have been proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS, leading to a privacy-protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.

  12. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional-bit-per-sample accuracy, which may be needed at very low SNR conditions where the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
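    The PM quantization step itself is simple: the codebook is the set of all permutations of a fixed sorted template, and the nearest codeword under Euclidean distance just places the template values in the rank order of the sample magnitudes. A minimal sketch, with the template treated as an assumed input:

```python
import numpy as np

def pm_quantize(x, template):
    """Permutation-modulation quantization of the magnitudes of x.
    Returns the quantized vector and the permutation (the PM label);
    signs are handled separately, as in reverse reconciliation."""
    mu = np.sort(np.asarray(template, dtype=float))[::-1]  # descending template
    order = np.argsort(-np.abs(x))     # positions ranked by magnitude
    q = np.empty(len(x))
    q[order] = mu                      # largest template value -> largest |x|
    return np.sign(x) * q, order
```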

  13. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued our work in the vector quantization area and have developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data, either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw: when the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than the user requested. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  14. The research of "blind" spot in the LVQ network

    NASA Astrophysics Data System (ADS)

    Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli

    2017-04-01

    Competitive neural networks are now widely used in pattern recognition, classification, and other areas, and show great advantages over traditional clustering methods. They are nevertheless still inadequate in several respects and need further improvement. Based on the learning vector quantization network proposed by Kohonen [1], this paper addresses the large training error that arises when there are "blind" spots in a network, resolving it through the introduction of threshold-value learning rules, and implements the method in Matlab.
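    The "blind spot" problem concerns prototypes, or regions of input space, that the standard winner-take-all update never reaches, inflating training error. The sketch below shows one threshold-style remedy: when even the winner is farther than a threshold from the sample, a same-class prototype is recruited instead of applying the usual LVQ rule. This illustrates the idea only; the paper's exact threshold learning rules may differ.

```python
import numpy as np

def lvq_with_threshold(X, y, P, P_lab, lr=0.05, theta=2.0, epochs=20):
    P = P.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.linalg.norm(P - xi, axis=1)
            j = int(np.argmin(d))
            if d[j] > theta:                    # sample lies in a "blind" region
                same = np.where(P_lab == yi)[0]
                k = same[int(np.argmin(d[same]))]
                P[k] += 0.5 * (xi - P[k])       # strong recruiting step
            else:                               # standard LVQ1 update
                s = 1.0 if P_lab[j] == yi else -1.0
                P[j] += s * lr * (xi - P[j])
    return P
```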

  15. Optical systolic array processor using residue arithmetic

    NASA Technical Reports Server (NTRS)

    Jackson, J.; Casasent, D.

    1983-01-01

    The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.

  16. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    DTIC Science & Technology

    2002-01-01

    ...their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested...

  17. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and the hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  18. Quantization, Frobenius and Bi algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    NASA Astrophysics Data System (ADS)

    Sadrzadeh, Mehrnoosh

    2017-07-01

    Compact closed categories and Frobenius and bi-algebras have been applied to model and reason about quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name ``categorical distributional compositional'' semantics, or in short, the ``DisCoCat'' model. This model combines the statistical vector models of word meaning with compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing, and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the quantization functor of quantum field theory. The original DisCoCat model used only compact closed categories. Later, Frobenius algebras were added to it to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.

  19. Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.

    NASA Astrophysics Data System (ADS)

    Czerwinski, Michael Joseph

    The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and an interpretation of these quantized values for the sub-band conductances are made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms, which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of the two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone. The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance, based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.

  20. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from the vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.

  1. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the hidden Markov model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a hidden Markov model. First, after the dynamic observation vector in measuring space is processed by DTW, an error vector including the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the hidden Markov model classifier. The introduction of dynamic time warping solves the problem of extracting features from dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems by utilizing dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
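    The DTW stage aligns a measured dynamic process vector with a reference of possibly different length before feature extraction. A standard O(nm) sketch follows; the abstract does not spell out the local cost, so absolute difference is assumed here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])     # assumed local cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```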

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng

    DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.

  3. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization; (2) tree coding; and (3) trellis coding. The paper explains the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports on results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.

  4. Vacuum polarization of the quantized massive fields in Friedman-Robertson-Walker spacetime

    NASA Astrophysics Data System (ADS)

    Matyjasek, Jerzy; Sadurski, Paweł; Telecka, Małgorzata

    2014-04-01

    The stress-energy tensor of the quantized massive fields in a spatially open, flat, and closed Friedman-Robertson-Walker universe is constructed using the adiabatic regularization (for the scalar field) and the Schwinger-DeWitt approach (for the scalar, spinor, and vector fields). It is shown that the stress-energy tensor calculated in the sixth adiabatic order coincides with the result obtained from the regularized effective action, constructed from the heat kernel coefficient a3. The behavior of the tensor is examined in the power-law cosmological models, and the semiclassical Einstein field equations are solved exactly in a few physically interesting cases, such as the generalized Starobinsky models.

  5. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, the quantization is responsible for quality losses; on the other hand, it allows the reduction of bit rates to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of the Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional part after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for intra coding.
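    The selection logic reduces to re-quantizing the same residual block at several candidate parameters and keeping the one with the smallest Lagrangian cost J = D + λR. A minimal sketch; the encode() callback standing in for the transform/quantization/entropy path is an assumption, not the H.264/AVC pipeline itself.

```python
def best_qp(block, qp_candidates, lam, encode):
    """Return the QP minimizing J = D + lam * R for this block.
    encode(block, qp) is assumed to return (bits, distortion)."""
    best_j, best_qp_value = float("inf"), None
    for qp in qp_candidates:
        bits, distortion = encode(block, qp)
        j = distortion + lam * bits
        if j < best_j:
            best_j, best_qp_value = j, qp
    return best_qp_value
```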

  6. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  7. Comparison of Radio Frequency Distinct Native Attribute and Matched Filtering Techniques for Device Discrimination and Operation Identification

    DTIC Science & Technology

    ...URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple discriminant analysis/maximum likelihood (MDA/ML) classification, and 2) RF-DNA fingerprints paired with generalized relevance learning vector quantization...

  8. Image segmentation using fuzzy LVQ clustering networks

    NASA Technical Reports Server (NTRS)

    Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.

    1992-01-01

    In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.

  9. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and face detection is then implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features: a binary vector quantization (VQ) histogram built from DCT coefficients in low-frequency domains, as well as an improved local binary pattern (Improved LBP) histogram in the spatial domain. Verification results for the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate the proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
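    The final decision step is plain weighted score fusion. A minimal sketch, using histogram intersection as an illustrative similarity (the record does not commit to a specific comparison measure), with the weight as a tunable assumption:

```python
import numpy as np

def fused_score(vq_probe, vq_gallery, lbp_probe, lbp_gallery, w_vq=0.5):
    """Compare probe and enrolled histograms per feature type, then
    combine by weighted averaging; accept if above a threshold."""
    def intersection(h1, h2):
        return np.minimum(h1, h2).sum() / max(h1.sum(), 1e-12)
    s_vq = intersection(vq_probe, vq_gallery)
    s_lbp = intersection(lbp_probe, lbp_gallery)
    return w_vq * s_vq + (1.0 - w_vq) * s_lbp
```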

  10. Full Spectrum Conversion Using Traveling Pulse Wave Quantization

    DTIC Science & Technology

    2017-03-01

    ...a temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete... pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a...

  11. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    PubMed

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED, we observe improvements in kNN classification accuracy over traditional distance functions.
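    The equi-depth half of the idea is easy to sketch: per dimension, bin boundaries are chosen so each bin holds the same number of points, and a point's contribution is how many bins it sits from the query. This is an illustration only; the paper's query-dependent refinement and its bit-sliced index implementation are not reproduced here.

```python
import numpy as np

def equi_depth_edges(col, bins=8):
    """Equal-population bin edges for one dimension."""
    return np.quantile(col, np.linspace(0.0, 1.0, bins + 1)[1:-1])

def quantized_dissimilarity(X, query, bins=8):
    """Sum over dimensions of the bin distance between each point and
    the query (smaller = more similar)."""
    n, d = X.shape
    score = np.zeros(n)
    for j in range(d):
        edges = equi_depth_edges(X[:, j], bins)
        codes = np.searchsorted(edges, X[:, j])
        score += np.abs(codes - np.searchsorted(edges, query[j]))
    return score
```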

  12. Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of ½

    PubMed Central

    Maryasov, Alexander G.

    2012-01-01

    The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or ‘powder’ sample when g tensor anisotropy is significant. PMID:22743542

  13. Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of 1/2

    NASA Astrophysics Data System (ADS)

    Maryasov, Alexander G.; Bowman, Michael K.

    2012-08-01

    The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or 'powder' sample when g tensor anisotropy is significant.

  14. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance as a result of the information lost after quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulations demonstrate that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects it only a little: tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.

  15. Adaptive filtering with the self-organizing map: a performance comparison.

    PubMed

    Barreto, Guilherme A; Souza, Luís Gustavo M

    2006-01-01

    In this paper we provide an in-depth evaluation of the SOM as a feasible tool for nonlinear adaptive filtering. A comprehensive survey of existing SOM-based and related architectures for learning input-output mappings is carried out, and the application of these architectures to nonlinear adaptive filtering is formulated. We then introduce two simple procedures for building RBF-based nonlinear filters using the Vector-Quantized Temporal Associative Memory (VQTAM), a recently proposed method for learning dynamical input-output mappings using the SOM. The aforementioned SOM-based adaptive filters are compared with standard FIR/LMS and FIR/LMS-Newton linear transversal filters, as well as with powerful MLP-based filters, in nonlinear channel equalization and inverse modeling tasks. The results obtained in both tasks indicate that SOM-based filters can consistently outperform powerful MLP-based ones.
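    VQTAM stores, in each prototype, an input part and an output part; the winner is found on the input part only, and both parts are updated. The sketch below is a flat (non-topological) variant for scalar outputs, offered as an assumption-laden illustration; a true SOM would also update the winner's lattice neighbours with a shrinking neighbourhood.

```python
import numpy as np

class VQTAM:
    def __init__(self, n_protos, in_dim, seed=1):
        rng = np.random.default_rng(seed)
        self.w_in = rng.standard_normal((n_protos, in_dim)) * 0.1
        self.w_out = np.zeros(n_protos)

    def fit(self, X, y, lr=0.1, epochs=20):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                j = int(np.argmin(np.linalg.norm(self.w_in - xi, axis=1)))
                self.w_in[j] += lr * (xi - self.w_in[j])    # input codebook
                self.w_out[j] += lr * (yi - self.w_out[j])  # associated output
            lr *= 0.9
        return self

    def predict(self, X):
        idx = [int(np.argmin(np.linalg.norm(self.w_in - xi, axis=1))) for xi in X]
        return self.w_out[np.asarray(idx)]
```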

  16. Quantization and superselection sectors III: Multiply connected spaces and indistinguishable particles

    NASA Astrophysics Data System (ADS)

    Landsman, N. P. Klaas

    2016-09-01

    We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel's notion of C∗-algebraic ("strict") deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which renders the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD, i.e. quantum chromodynamics).

  17. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can easily be encoded by applying a gray-scale compression technique to each of the three color planes. Such an approach, however, fails to take into account the correlations existing between the color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  18. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned, and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
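    The quantized Debye sum is compact enough to sketch. Below, hists maps an atom-type pair to (bin centres, counts) for its pair-distance histogram, and f maps an atom type to a scattering factor taken as q-independent for brevity (real scattering factors vary with q, and the r = 0 self-terms, which add a constant sum of squared factors, are omitted).

```python
import numpy as np

def debye_from_histograms(q, hists, f):
    """I(q) = sum over atom-type pairs (a, b) of
    f[a] * f[b] * sum over bins of count * sin(q r) / (q r)."""
    I = np.zeros_like(q, dtype=float)
    for (a, b), (r, n) in hists.items():
        qr = np.outer(q, r)
        I += f[a] * f[b] * (n * np.sinc(qr / np.pi)).sum(axis=1)
    return I  # np.sinc(x) = sin(pi x)/(pi x), hence the division by pi
```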

  1. Quaternionic Kähler Detour Complexes and N = 2 Supersymmetric Black Holes

    NASA Astrophysics Data System (ADS)

    Cherney, D.; Latini, E.; Waldron, A.

    2011-03-01

We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four-dimensional N = 2 supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost extended Hilbert space.) The resulting quantized models are gauge invariant field theories with fields equaling sections of special quaternionic vector bundles. They underlie and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston’s complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery, and although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.

  2. Detection of potential mosquito breeding sites based on community sourced geotagged images

    NASA Astrophysics Data System (ADS)

    Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna

    2014-06-01

Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to make better and informed data-driven decisions. Our work shows how geotagged images collected by the general population can be used to combat malaria and dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions like blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag of visual words model. We train an SVM classifier on a histogram-based feature vector obtained after the vector quantization of SIFT features to discriminate images containing either a small stagnant water body like a puddle, or open containers, tires, bushes, etc., from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using the feature-level fusion or max approaches presented in the paper. The heat map thus generated can be used by concerned health authorities to take appropriate action and to promote civic awareness.
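
    The bag-of-visual-words step above follows a standard recipe: quantize local descriptors against a learned codebook, then classify the resulting histogram. The sketch below illustrates this with random vectors standing in for SIFT descriptors; the codebook size, kernel, and labels are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Stand-ins for per-image SIFT descriptor sets (variable count, 128-D).
    images = [rng.normal(size=(rng.integers(50, 200), 128)) for _ in range(40)]
    labels = np.array([0] * 20 + [1] * 20)    # e.g. stagnant water vs. not

    k = 64                                    # codebook size (assumed)
    codebook = KMeans(n_clusters=k, n_init=4, random_state=0)
    codebook.fit(np.vstack(images))

    def bovw_histogram(descriptors):
        # Vector quantization: nearest codeword per descriptor, then count.
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=k).astype(float)
        return hist / hist.sum()              # normalize for keypoint count

    X = np.array([bovw_histogram(d) for d in images])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```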

  3. LVQ and backpropagation neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings, yielding a large number of training vectors. Large training sets can make the time required to train the network very long, and the network may not be able to generalize well over them. To reduce the size of the training sets, the SSME test-firing data is reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator; its performance as a function approximator using reduced training sets is presented and compared with that of the backpropagation network.
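
    For reference, the core LVQ1 update that makes this kind of codebook-based data reduction work can be written in a few lines; this is a generic sketch of the algorithm under assumed parameters, not the authors' implementation.

    ```python
    import numpy as np

    def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
        """LVQ1: pull the winning prototype toward same-class samples,
        push it away from other-class samples. The trained prototypes
        serve as a much smaller surrogate training set."""
        P = prototypes.astype(float).copy()
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                j = np.argmin(np.linalg.norm(P - xi, axis=1))  # winner
                sign = 1.0 if proto_labels[j] == yi else -1.0
                P[j] += sign * lr * (xi - P[j])
            lr *= 0.9                      # decay the learning rate
        return P
    ```

    Prototypes are commonly initialized from class-wise k-means centroids, so each class keeps a few representatives.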

  4. Master equation for open two-band systems and its applications to Hall conductance

    NASA Astrophysics Data System (ADS)

    Shen, H. Z.; Zhang, S. S.; Dai, C. M.; Yi, X. X.

    2018-02-01

    Hall conductivity in the presence of a dephasing environment has recently been investigated with a dissipative term introduced phenomenologically. In this paper, we study the dissipative topological insulator (TI) and its topological transition in the presence of quantized electromagnetic environments. A Lindblad-type equation is derived to determine the dynamics of a two-band system. When the two-band model describes TIs, the environment may be the fluctuations of radiation that surround the TIs. We find the dependence of decay rates in the master equation on Bloch vectors in the two-band system, which leads to a mixing of the band occupations. Hence the environment-induced current is in general not perfectly topological in the presence of coupling to the environment, although deviations are small in the weak limit. As an illustration, we apply the Bloch-vector-dependent master equation to TIs and calculate the Hall conductance of tight-binding electrons in a two-dimensional lattice. The influence of environments on the Hall conductance is presented and discussed. The calculations show that the phase transition points of the TIs are robust against the quantized electromagnetic environment. The results might bridge the gap between quantum optics and topological photonic materials.

  5. Musical sound analysis/synthesis using vector-quantized time-varying spectra

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Beauchamp, James W.

    2002-11-01

A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
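
    A toy version of the analysis stage might look as follows (hypothetical array shapes and parameters): per-frame spectral tilt is fit by least squares in log-log coordinates and removed, and k-means clustering then builds the Codebook of flattened spectra.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_spectral_codebook(frames, k=32):
        """frames: (n_frames, n_harmonics) array of harmonic amplitudes."""
        harm = np.arange(1, frames.shape[1] + 1)
        logf = np.log(harm) - np.log(harm).mean()     # centered log-frequency
        loga = np.log(np.maximum(frames, 1e-9))
        # Least-squares spectral tilt per frame, then flatten the spectra.
        tilt = (loga @ logf) / (logf ** 2).sum()
        flattened = loga - np.outer(tilt, logf)
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(flattened)
        indices = km.predict(flattened)   # best least-squares Codebook match
        return km.cluster_centers_, indices, tilt
    ```

    At synthesis time, the index and tilt streams would be smoothed and interpolated between Codebook spectra, as the abstract describes.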

  6. Electron-electron interaction and spin-orbit coupling in InAs/AlSb heterostructures with a two-dimensional electron gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavrilenko, V. I.; Krishtopenko, S. S., E-mail: ds_a-teens@mail.ru; Goiran, M.

    2011-01-15

The effect of electron-electron interaction on the spectrum of two-dimensional electron states is studied in InAs/AlSb (001) heterostructures with a GaSb cap layer and one filled size-quantization subband. The energy spectrum of two-dimensional electrons is calculated in the Hartree and Hartree-Fock approximations. It is shown that the exchange interaction, by decreasing the electron energy in the subbands, increases the energy gap between subbands and the spin-orbit splitting of the spectrum in the entire region of electron concentrations at which only the lower size-quantization subband is filled. The nonlinear dependence of the Rashba splitting constant at the Fermi wave vector on the concentration of two-dimensional electrons is demonstrated.

  7. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. There are two characteristics which determine a vector element's utility: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all prior-art methods in recognition accuracy on both datasets.

  8. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    NASA Astrophysics Data System (ADS)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, a local feature, rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose, and illumination variations. Since hexagonal images have more inherent merits than square images for efficient recognition, we extract SIFT keypoints from hexagonal-sampling images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed against a constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.

  9. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
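
    Sequential scalar quantization can be pictured as a chain of one-dimensional quantizers, each conditioned on the bin chosen for the previous coordinate, which is what makes a lookup-table implementation possible. The sketch below uses quantile-based splits as an illustrative stand-in for the paper's optimized allocation.

    ```python
    import numpy as np

    def ssq_palette(pixels, bits=(3, 3, 2)):
        """pixels: (N, 3) array in some color space; returns up to
        2**sum(bits) palette colors via sequential scalar quantization."""
        n0, n1, n2 = (2 ** b for b in bits)
        split = lambda v, n: np.quantile(v, np.linspace(0, 1, n + 1)[1:-1])
        palette = []
        c0 = np.digitize(pixels[:, 0], split(pixels[:, 0], n0))
        for i in range(n0):
            s0 = pixels[c0 == i]
            if not len(s0):
                continue
            c1 = np.digitize(s0[:, 1], split(s0[:, 1], n1))
            for j in range(n1):
                s1 = s0[c1 == j]
                if not len(s1):
                    continue
                c2 = np.digitize(s1[:, 2], split(s1[:, 2], n2))
                for m in range(n2):
                    s2 = s1[c2 == m]
                    if len(s2):
                        palette.append(s2.mean(axis=0))  # cell centroid
        return np.array(palette)

    # e.g. a 256-color palette: ssq_palette(img.reshape(-1, 3), bits=(3, 3, 2))
    ```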

  10. Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.

    1982-03-01

This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.

  11. Quantized Rabi oscillations and circular dichroism in quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Cooper, N. R.; Goldman, N.

    2018-06-01

    The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.

  12. The Coulomb problem on a 3-sphere and Heun polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellucci, Stefano; Yeghikyan, Vahagn; Yerevan State University, Alex-Manoogian st. 1, 00025 Yerevan

    2013-08-15

The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.

  13. Deformation quantization with separation of variables of an endomorphism bundle

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2014-01-01

    Given a holomorphic Hermitian vector bundle E and a star-product with separation of variables on a pseudo-Kähler manifold, we construct a star product on the sections of the endomorphism bundle of the dual bundle E∗ which also has the appropriately generalized property of separation of variables. For this star product we prove a generalization of Gammelgaard's graph-theoretic formula.

  14. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues, but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data otherwise makes frequency estimation of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.

  15. Detection of laryngeal function using speech and electroglottographic data.

    PubMed

    Childers, D G; Bae, K S

    1992-01-01

The purpose of this research was to develop quantitative measures for the assessment of laryngeal function using speech and electroglottographic (EGG) data. We developed two procedures for the detection of laryngeal pathology: 1) a spectral distortion measure using pitch synchronous and asynchronous methods with linear predictive coding (LPC) vectors and vector quantization (VQ) and 2) analysis of the EGG signal using time interval and amplitude difference measures. The VQ procedure was conjectured to offer the possibility of circumventing the need to estimate the glottal volume velocity waveform by inverse filtering techniques. The EGG procedure evaluated data that was "nearly" a direct measure of vocal fold vibratory motion and thus was conjectured to offer the potential for providing an excellent assessment of laryngeal function. A threshold-based procedure gave probabilities of detecting pathology of 75.9% and 69.0% using procedures 1) and 2), respectively, for 29 patients with pathological voices and 52 normal subjects. The false alarm probability was 9.6% for the normal subjects.

  16. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
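
    The thresholding logic sketched in the abstract is easy to make concrete: after a competitive model is trained on normal data, the empirical distribution of quantization errors defines the normality profile, and a high percentile of it becomes the alarm threshold. The following is a minimal sketch with synthetic data and a nearest-prototype stand-in for the competitive network; all sizes and percentiles are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    train = rng.normal(size=(2000, 8))      # state vectors, normal operation
    prototypes = train[rng.choice(len(train), 32, replace=False)]

    def winner(x):
        return np.argmin(np.linalg.norm(prototypes - x, axis=1))

    # Global NP: distribution of quantization errors on normal data.
    qerr = np.array([np.linalg.norm(x - prototypes[winner(x)]) for x in train])
    global_thr = np.percentile(qerr, 99)

    # Local NPs: per-component absolute errors against the winning prototype.
    comp_err = np.abs(np.array([x - prototypes[winner(x)] for x in train]))
    local_thr = np.percentile(comp_err, 99, axis=0)

    state = rng.normal(size=8)
    state[3] += 6.0                         # inject a fault into variable 3
    e = state - prototypes[winner(state)]
    if np.linalg.norm(e) > global_thr:      # global test, then local drill-down
        print("abnormal state variables:", np.where(np.abs(e) > local_thr)[0])
    ```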

  17. Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes

    NASA Astrophysics Data System (ADS)

    Saini, Sahil; Singh, Parampreet

    2018-03-01

    We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.

  18. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage has, at each pixel location, a multivariate pixel value that has been scaled and quantized into a gray-level vector; the covariance between components measures the extent to which two component images are correlated. The PCT decorrelates the multiimage to reduce its dimensionality; non-negligible off-diagonal covariance elements reveal intercomponent dependencies, and for the purposes of display the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.

  19. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coders encode any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  20. Scalable hybrid computation with spikes.

    PubMed

    Sarpeshkar, Rahul; O'Halloran, Micah

    2002-09-01

    We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.

  1. Measuring and Modeling Shared Visual Attention

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Gontar, Patrick

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
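
    Both comparisons described above are straightforward to prototype. In the sketch below (synthetic AOI sequences, assumed window size), each gaze sample is one-hot quantized over N AOIs, window averages give per-AOI proportions, and each window is scored by the correlation of the two crews' proportion vectors plus a chi-square test on the corresponding contingency table.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    def aoi_one_hot(seq, n_aois):
        v = np.zeros((len(seq), n_aois))
        v[np.arange(len(seq)), seq] = 1.0       # 1 at the fixated AOI
        return v

    def shared_attention(seq_a, seq_b, n_aois, window):
        a, b = aoi_one_hot(seq_a, n_aois), aoi_one_hot(seq_b, n_aois)
        for t in range(0, len(seq_a) - window + 1, window):
            pa = a[t:t + window].mean(axis=0)   # proportion of time per AOI
            pb = b[t:t + window].mean(axis=0)
            corr = np.corrcoef(pa, pb)[0, 1]
            counts = np.vstack([pa, pb]) * window + 1e-9  # fixation counts
            chi2, p, _, _ = chi2_contingency(counts)
            print(f"t={t}: corr={corr:.2f}, chi-square p={p:.3f}")

    rng = np.random.default_rng(0)
    shared_attention(rng.integers(0, 5, 300), rng.integers(0, 5, 300), 5, 100)
    ```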

  2. Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Fetzer, Eric

    2007-01-01

Empirical probability distributions derived from data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data that preserve such information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
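
    Entropy-constrained vector quantization differs from plain k-means in one line: points are assigned to the codeword minimizing distortion plus a Lagrange multiplier times the codeword's code length. A minimal sketch of the generic algorithm (illustrative parameters, not the presenters' code):

    ```python
    import numpy as np

    def ecvq(data, k=16, lam=0.1, iters=20, seed=0):
        """Lloyd-style entropy-constrained VQ on (N, D) data."""
        rng = np.random.default_rng(seed)
        C = data[rng.choice(len(data), k, replace=False)].astype(float)
        p = np.full(k, 1.0 / k)
        for _ in range(iters):
            lengths = -np.log2(np.maximum(p, 1e-12))        # ideal code lengths
            d2 = ((data[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            assign = np.argmin(d2 + lam * lengths, axis=1)  # rate-biased winner
            for j in range(k):
                members = data[assign == j]
                if len(members):
                    C[j] = members.mean(axis=0)
            counts = np.bincount(assign, minlength=k)
            p = counts / counts.sum()
        return C, p, assign
    ```

    Raising lam trades distortion for entropy, so rarely used codewords are priced out and the effective codebook shrinks.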

  3. Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.

    2017-01-01

Natural rubber is an important export commodity in Indonesia and can be a major contributor to national economic development. One type of rubber used as an export material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of the RSS. RSS rubber quality is specified in SNI 06-001-1987 and the International Standards of Quality and Packing for Natural Rubber Grades (The Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually, by visually inspecting the level of air bubbles on the surface of the rubber sheet, so the result is subjective and unreliable. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method achieves 89% accuracy, with the best performance reached at the fifteenth epoch.

  4. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  5. Wigner functions on non-standard symplectic vector spaces

    NASA Astrophysics Data System (ADS)

    Dias, Nuno Costa; Prata, João Nuno

    2018-01-01

    We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

  6. The BRST complex of homological Poisson reduction

    NASA Astrophysics Data System (ADS)

    Müller-Lennert, Martin

    2017-02-01

    BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

  7. Observation of Landau levels in potassium-intercalated graphite under a zero magnetic field

    PubMed Central

    Guo, Donghui; Kondo, Takahiro; Machida, Takahiro; Iwatake, Keigo; Okada, Susumu; Nakamura, Junji

    2012-01-01

The charge carriers in graphene are massless Dirac fermions and exhibit a relativistic Landau-level quantization in a magnetic field. Recently, it has been reported that, without any external magnetic field, quantized energy levels have also been observed from strained graphene nanobubbles on a platinum surface, which were attributed to the Landau levels of massless Dirac fermions in graphene formed by a strain-induced pseudomagnetic field. Here we show the generation of the Landau levels of massless Dirac fermions on a partially potassium-intercalated graphite surface without applying an external magnetic field. Landau levels of massless Dirac fermions indicate the graphene character of partially potassium-intercalated graphite. The generation of the Landau levels is ascribed to a vector potential induced by the perturbation of nearest-neighbour hopping, which may originate from a strain or a gradient of on-site potentials at the perimeters of potassium-free domains. PMID:22990864

  8. Diffractive charmonium spectrum in high energy collisions in the basis light-front quantization approach

    DOE PAGES

    Chen, Guangyao; Li, Yang; Maris, Pieter; ...

    2017-04-14

Using the charmonium light-front wavefunctions obtained by diagonalizing an effective Hamiltonian with the one-gluon exchange interaction and a confining potential inspired by light-front holography in the basis light-front quantization formalism, we compute the production of charmonium states in diffractive deep inelastic scattering and ultra-peripheral heavy ion collisions within the dipole picture. Our method allows us to predict yields of all vector charmonium states below the open flavor thresholds in high-energy deep inelastic scattering, proton-nucleus and ultra-peripheral heavy ion collisions, without introducing any new parameters in the light-front wavefunctions. The obtained charmonium cross section is in reasonable agreement with experimental data at HERA, RHIC and LHC. We observe that the cross-section ratio σΨ(2s)/σJ/Ψ shows significant independence from the model parameters.

  9. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
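
    The two optimizations described above follow a classical pattern. For the first stage, the textbook Lagrangian solution allocates bits in proportion to log-variance (reverse water-filling); the sketch below shows that generic rule with KKT clamping of negative allocations, as an illustration rather than the paper's exact scheme.

    ```python
    import numpy as np

    def allocate_bits(var, total_bits):
        """Minimize sum_i var_i * 2**(-2 b_i) subject to sum_i b_i = B.
        KKT solution: b_i = B/n + 0.5*log2(var_i / geo_mean(active var)),
        with subbands that would get b_i < 0 dropped and the rest re-solved."""
        var = np.asarray(var, dtype=float)
        idx = np.arange(len(var))
        b = np.zeros(len(var))
        active = np.ones(len(var), dtype=bool)
        while True:
            v = var[active]
            geo_mean = np.exp(np.mean(np.log(v)))
            bi = total_bits / active.sum() + 0.5 * np.log2(v / geo_mean)
            if (bi >= 0).all():
                b[active] = bi
                return b
            active[idx[active][bi < 0]] = False   # clamp and iterate

    print(allocate_bits([4.0, 1.0, 0.25], total_bits=6))   # -> [3. 2. 1.]
    ```

    High-variance sub-bands receive more bits; in the paper's second stage, an analogous Lagrangian argument distributes transmit power across quantization bit layers according to their error sensitivity.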

  10. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  11. New adaptive color quantization method based on self-organizing maps.

    PubMed

    Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai

    2005-01-01

Color quantization (CQ) is an image processing task popularly used to convert true color images to palletized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with the neuron dependent frequency sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the current state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging an existing encoder in an overall lossy compression scheme.
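
    The frequency-sensitive part of FS-SOM is the easiest piece to isolate: each neuron's distance is scaled by how often it has already won, so dead units become progressively cheaper to win. Below is a bare-bones frequency-sensitive competitive learning sketch, without FS-SOM's neighborhood adaptation, permutation sequence, or reinitialization; all parameters are assumed.

    ```python
    import numpy as np

    def fscl_palette(pixels, k=16, lr=0.05, epochs=5, seed=0):
        """pixels: (N, 3) colors; returns k palette colors."""
        rng = np.random.default_rng(seed)
        W = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
        wins = np.ones(k)
        for _ in range(epochs):
            for x in pixels[rng.permutation(len(pixels))]:
                d = wins * np.linalg.norm(W - x, axis=1)  # fairness-scaled
                j = np.argmin(d)
                W[j] += lr * (x - W[j])                   # move winner only
                wins[j] += 1
        return W
    ```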

  12. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
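
    The generic model is simple to demonstrate end to end; the sketch below (JPEG-like, with an assumed flat quantization matrix) shows exactly where the quantization distortion enters.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def block_quantize(img, Q):
        """8x8 block DCT, quantize by matrix Q, reconstruct."""
        h, w = (s - s % 8 for s in img.shape)
        out = np.empty((h, w))
        for i in range(0, h, 8):
            for j in range(0, w, 8):
                block = img[i:i + 8, j:j + 8].astype(float)
                coeffs = dctn(block, norm="ortho")
                levels = np.round(coeffs / Q)          # the lossy step
                out[i:i + 8, j:j + 8] = idctn(levels * Q, norm="ortho")
        return out

    # e.g. recon = block_quantize(img, Q=16 * np.ones((8, 8)))
    ```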

  13. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  14. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC suffer either increased bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21% to 42% reduction in BDBR, compared with the traditional method of FRUC cascaded with coding.

  15. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    NASA Astrophysics Data System (ADS)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping

    2015-11-01

A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to coincide with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in estimating the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  16. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  17. An Intelligent System for Monitoring the Microgravity Environment Quality On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Lin, Paul P.; Jules, Kenol

    2002-01-01

    An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.

  18. Course 4: Anyons

    NASA Astrophysics Data System (ADS)

    Myrheim, J.

Contents
    1 Introduction
      1.1 The concept of particle statistics
      1.2 Statistical mechanics and the many-body problem
      1.3 Experimental physics in two dimensions
      1.4 The algebraic approach: Heisenberg quantization
      1.5 More general quantizations
    2 The configuration space
      2.1 The Euclidean relative space for two particles
      2.2 Dimensions d=1,2,3
      2.3 Homotopy
      2.4 The braid group
    3 Schroedinger quantization in one dimension
    4 Heisenberg quantization in one dimension
      4.1 The coordinate representation
    5 Schroedinger quantization in dimension d ≥ 2
      5.1 Scalar wave functions
      5.2 Homotopy
      5.3 Interchange phases
      5.4 The statistics vector potential
      5.5 The N-particle case
      5.6 Chern-Simons theory
    6 The Feynman path integral for anyons
      6.1 Eigenstates for position and momentum
      6.2 The path integral
      6.3 Conjugation classes in S_N
      6.4 The non-interacting case
      6.5 Duality of Feynman and Schroedinger quantization
    7 The harmonic oscillator
      7.1 The two-dimensional harmonic oscillator
      7.2 Two anyons in a harmonic oscillator potential
      7.3 More than two anyons
      7.4 The three-anyon problem
    8 The anyon gas
      8.1 The cluster and virial expansions
      8.2 First and second order perturbative results
      8.3 Regularization by periodic boundary conditions
      8.4 Regularization by a harmonic oscillator potential
      8.5 Bosons and fermions
      8.6 Two anyons
      8.7 Three anyons
      8.8 The Monte Carlo method
      8.9 The path integral representation of the coefficients G_P
      8.10 Exact and approximate polynomials
      8.11 The fourth virial coefficient of anyons
      8.12 Two polynomial theorems
    9 Charged particles in a constant magnetic field
      9.1 One particle in a magnetic field
      9.2 Two anyons in a magnetic field
      9.3 The anyon gas in a magnetic field
    10 Interchange phases and geometric phases
      10.1 Introduction to geometric phases
      10.2 One particle in a magnetic field
      10.3 Two particles in a magnetic field
      10.4 Interchange of two anyons in potential wells
      10.5 Laughlin's theory of the fractional quantum Hall effect

  1. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the receiver operating characteristic (ROC) transfer function, with each assessment yielding an associated area-under-the-curve (AUC) figure of merit. The automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
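
    The selection criterion reduces to ranking candidates by ROC AUC against the labeled regions. A minimal sketch of that scoring step (hypothetical inputs: per-candidate match scores on positive and negative ground-truth regions):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def best_candidate(scores_pos, scores_neg):
        """scores_pos: (n_candidates, n_pos) match scores on positive
        regions; scores_neg: (n_candidates, n_neg) on negative regions.
        Returns the index and AUC of the best-discriminating vector."""
        y = np.concatenate([np.ones(scores_pos.shape[1]),
                            np.zeros(scores_neg.shape[1])])
        aucs = [roc_auc_score(y, np.concatenate([p, n]))
                for p, n in zip(scores_pos, scores_neg)]
        return int(np.argmax(aucs)), float(np.max(aucs))
    ```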

  2. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
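
    The patent describes a dynamic-programming optimization of rate against distortion; the sketch below is a simplified per-coefficient Lagrangian stand-in (cost D + λR estimated from gathered DCT statistics), not the patented procedure. The inputs dct_blocks, q_candidates, and lam are assumptions for illustration:

        import numpy as np

        def rd_optimal_table(dct_blocks, q_candidates, lam):
            """Pick one quantization step per DCT position by minimizing
            distortion + lam * rate (a simplified stand-in for the patent's
            dynamic-programming search).

            dct_blocks: (n_blocks, 8, 8) array of DCT coefficients.
            """
            table = np.empty((8, 8), dtype=int)
            for u in range(8):
                for v in range(8):
                    coeffs = dct_blocks[:, u, v]
                    best_cost, best_q = np.inf, q_candidates[0]
                    for q in q_candidates:
                        levels = np.round(coeffs / q)
                        dist = np.mean((coeffs - levels * q) ** 2)  # MSE distortion
                        # crude rate estimate: entropy of the quantized levels
                        _, counts = np.unique(levels, return_counts=True)
                        p = counts / counts.sum()
                        rate = -np.sum(p * np.log2(p))
                        cost = dist + lam * rate
                        if cost < best_cost:
                            best_cost, best_q = cost, q
                    table[u, v] = best_q
            return table

    Sweeping lam trades bit rate against distortion, tracing out the rate-distortion curve from which an operating point is chosen.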

  3. A novel method based on new adaptive LVQ neural network for predicting protein-protein interactions from protein sequences.

    PubMed

    Yousef, Abdulaziz; Moghadam Charkari, Nasrollah

    2013-11-07

    Protein-protein interaction (PPI) data are among the most important for understanding cellular processes. Many methods have been proposed to predict PPIs; however, methods that rely only on protein sequences as prior knowledge are more universal. In this paper, a sequence-based, fast, and adaptive PPI prediction method is introduced to assign two proteins to an interaction class (yes, no). First, in order to improve the representation of the sequences, twelve physicochemical properties of amino acids were used with different representation methods to transform the sequences of protein pairs into different feature vectors. Then, to speed up the learning process and reduce the effect of noisy PPI data, principal component analysis (PCA) is carried out as a feature extraction algorithm. Finally, a new and adaptive learning vector quantization (LVQ) predictor is designed to deal with different models of datasets, which are classified into balanced and imbalanced datasets. Accuracies of 93.88%, 90.03%, and 89.72% were found on the S. cerevisiae, H. pylori, and independent datasets, respectively. The results of various experiments indicate the efficiency and validity of the method.
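
    A minimal sketch of the PCA-then-LVQ stage, assuming plain LVQ1 updates (the paper's adaptive variant and its sequence-derived features are not reproduced here); X_feat and y stand for hypothetical feature vectors and interaction labels:

        import numpy as np
        from sklearn.decomposition import PCA

        def lvq1_train(X, y, n_protos_per_class=2, lr=0.05, epochs=30, seed=0):
            """Plain LVQ1: pull the winning prototype toward same-class
            samples, push it away from other-class samples."""
            rng = np.random.default_rng(seed)
            protos, labels = [], []
            for c in np.unique(y):
                idx = rng.choice(np.where(y == c)[0], n_protos_per_class, replace=False)
                protos.append(X[idx])
                labels += [c] * n_protos_per_class
            P, L = np.vstack(protos).astype(float), np.array(labels)
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    w = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # winner
                    sign = 1.0 if L[w] == y[i] else -1.0
                    P[w] += sign * lr * (X[i] - P[w])
            return P, L

        # usage sketch with hypothetical inputs:
        # X_red = PCA(n_components=20).fit_transform(X_feat)
        # protos, proto_labels = lvq1_train(X_red, y)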

  4. On families of differential equations on two-torus with all phase-lock areas

    NASA Astrophysics Data System (ADS)

    Glutsyuk, Alexey; Rybnikov, Leonid

    2017-01-01

    We consider two-parametric families of non-autonomous ordinary differential equations on the two-torus with coordinates (x, t) of the type $\dot{x} = v(x) + A + B f(t)$. We study the rotation number as a function of the parameters (A, B). The phase-lock areas are those level sets of the rotation number function $\rho = \rho(A, B)$ that have non-empty interiors. Buchstaber, Karpov and Tertychnyi studied the case $v(x) = \sin x$ in their joint paper. They observed the quantization effect: for every smooth periodic function f(t) the family of equations may have phase-lock areas only for integer rotation numbers. Another proof of this quantization statement was later obtained in a joint paper by Ilyashenko, Filimonov and Ryzhov. This implies a similar quantization effect for every $v(x) = a\sin(mx) + b\cos(mx) + c$ and rotation numbers that are multiples of $1/m$. We show that for every other analytic vector field v(x) (i.e. having at least two Fourier harmonics with non-zero non-opposite degrees and nonzero coefficients) there exists an analytic periodic function f(t) such that the corresponding family of equations has phase-lock areas for all rational values of the rotation number.

  5. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
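
    The pooling step lends itself to a short sketch: quantization errors are expressed in units of the (adapted) visual thresholds and combined with a Minkowski sum. The exponent beta = 4 below is an assumed value, not taken from the paper:

        import numpy as np

        def perceptual_error(dct_err, thresholds, beta=4.0):
            """Scale DCT quantization errors by visual thresholds and pool
            nonlinearly over all blocks (Minkowski sum; beta=4 is an assumed
            value, not the paper's calibrated exponent).

            dct_err, thresholds: (n_blocks, 8, 8) arrays.
            """
            jnds = np.abs(dct_err) / thresholds          # errors in threshold units
            return (jnds ** beta).sum() ** (1.0 / beta)  # total perceptual error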

  6. Quantization of Motor Activity into Primitives and Time-Frequency Atoms Using Independent Component Analysis and Matching Pursuit Algorithms

    DTIC Science & Technology

    2001-10-25

    form: (1) A is a scaling factor, t is time and r a coordinate vector describing the limb configuration. We...combination of limb state and EMG. In our early examination of EMG we detected underlying groups of muscles and phases of activity by inspection and...representations of EEG or other biological signals has been thoroughly explored. Such components might be used as a basis for neuroprosthetic control

  7. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further testing in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
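
    The core idea, generating a state update only when the state crosses a quantum level boundary, can be sketched in a few lines (a generic illustration, not the DEVS/HLA implementation):

        def quantized_updates(samples, quantum=0.5):
            """Emit a state update only when the signal crosses a quantum
            level boundary (sketch of the quantization-based message-
            reduction idea; a real DEVS model would do this event by event)."""
            updates, last_level = [], None
            for t, x in enumerate(samples):
                level = int(x // quantum)                  # current quantum level
                if level != last_level:
                    updates.append((t, level * quantum))   # crossing -> send update
                    last_level = level
            return updates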

  8. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based super-resolution (SR) algorithm for compressed videos in the discrete cosine transform (DCT) domain. Input to the system is a low-resolution (LR) compressed video together with a high-resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structure vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  9. Automatic assessment of voice quality according to the GRBAS scale.

    PubMed

    Sáenz-Lechón, Nicolás; Godino-Llorente, Juan I; Osma-Ruiz, Víctor; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2006-01-01

    Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for perceptual evaluation of voice quality; it is widely used in Japan, and there is increasing interest in both Europe and the United States. However, this technique needs well-trained experts, is based on the evaluator's expertise, and depends heavily on the evaluator's own psycho-physical state; furthermore, great variability is observed between the assessments of different evaluators. Therefore, an objective method to provide such a measurement of voice quality would be very valuable. In this paper, the automatic assessment of voice quality is addressed by means of short-term mel cepstral parameters (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. Results show that this approach provides acceptable results for this purpose, with accuracy around 65% at best.

  10. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber lengths in the architecture are numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  11. Detection of Road Surface States from Tire Noise Using Neural Network Analysis

    NASA Astrophysics Data System (ADS)

    Kongrattanaprasert, Wuttiwat; Nomura, Hideyuki; Kamakura, Tomoo; Ueda, Koji

    This report proposes a new processing method for automatically detecting the state of road surfaces from the tire noise of passing vehicles. In addition to multiple indicators of the signal features in the frequency domain, we propose a few feature indicators in the time domain to successfully classify the road states into four categories: snowy, slushy, wet, and dry. The method is based on artificial neural networks. The proposed classification is carried out by multiple neural networks using learning vector quantization, and the outcomes of the networks are then integrated by a voting decision-making scheme. Experimental results obtained from signals recorded over ten days in the snowy season demonstrate that an accuracy of approximately 90% can be attained in predicting road surface states using only tire noise data.

  12. Conductance dips and spin precession in a nonuniform waveguide with spin–orbit coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, A. I., E-mail: malyshev@phys.unn.ru; Kozulin, A. S.

    An infinite waveguide with a nonuniformity, a segment of finite length with spin–orbit coupling, is considered in the case when the Rashba and Dresselhaus parameters are identical. Analytical expressions have been derived in the single-mode approximation for the conductance of the system for an arbitrary initial spin state. Based on numerical calculations with several size quantization modes, we have detected and described the conductance dips arising when the waves are localized in the nonuniformity due to the formation of an effective potential well in it. We show that allowance for the evanescent modes under carrier spin precession in an effective magnetic field does not lead to a change in the direction of the average spin vector at the output of the system.

  13. Application of two neural network paradigms to the study of voluntary employee turnover.

    PubMed

    Somers, M J

    1999-04-01

    Two neural network paradigms, multilayer perceptron and learning vector quantization, were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The first was to assess whether neural computing techniques offered greater predictive accuracy than conventional turnover methodologies. The second was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.

  14. Development of a brain MRI-based hidden Markov model for dementia recognition.

    PubMed

    Chen, Ying; Pham, Tuan D

    2013-01-01

    Dementia is an age-related cognitive decline indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help in understanding disease development and contribute to early prediction and prevention, but modeling that can best capture brain structural variability and be valid in both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Regularity dimension and semi-variogram were used to extract structural features of the brains, and a vector quantization (VQ) method was applied to convert the extracted feature vectors into prototype vectors. The output VQ indices were then utilized to estimate parameters for the HMMs. To validate accuracy and robustness, experiments were carried out on individuals characterized as non-demented or as having mild Alzheimer's disease. Four HMMs were constructed based on cohorts of non-demented young, middle-aged, and elderly subjects and of demented elderly subjects, respectively. Classification was carried out using a dataset including both non-demented and demented individuals with a wide age range. The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved better classification accuracy than related works using different classifiers. The results demonstrate the ability of the proposed modeling to recognize early dementia. The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia. The brain MRI-based HMMs developed in our proposed research are efficient and robust, and can easily be used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia.
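
    The VQ step that feeds the HMMs can be sketched as learning a codebook on training feature vectors and mapping each subject's feature sequence to discrete indices. K-means is used below as the codebook learner, which is an assumption; the paper does not commit to a specific VQ algorithm here:

        import numpy as np
        from sklearn.cluster import KMeans

        def features_to_symbols(train_feats, all_sequences, codebook_size=32):
            """Learn a VQ codebook on training feature vectors, then map each
            subject's feature sequence to discrete VQ indices for HMM training.

            train_feats: list of (n_i, d) arrays; all_sequences: list of (m_j, d)."""
            vq = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
            vq.fit(np.vstack(train_feats))
            return [vq.predict(seq) for seq in all_sequences]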

  15. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks into hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted via two steps on the server: apply k-means-based adaptive quantization to the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The convolutional neural network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
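
    A minimal sketch of the k-means weight-quantization half of the server-side loop (the retraining step is omitted): the weights are clustered into 2**bits centroids, and each weight is stored as a short index into the shared codebook:

        import numpy as np
        from sklearn.cluster import KMeans

        def quantize_weights(weights, bits=4):
            """Cluster weights into 2**bits centroids; store per-weight
            cluster indices (the low-bit codes) and the shared codebook."""
            w = weights.reshape(-1, 1)
            km = KMeans(n_clusters=2 ** bits, n_init=10, random_state=0).fit(w)
            codes = km.predict(w).astype(np.uint8)   # 4-bit indices
            codebook = km.cluster_centers_.ravel()
            return codes.reshape(weights.shape), codebook

        def dequantize(codes, codebook):
            return codebook[codes]                   # reconstruct approximate weights

    In the paper's scheme the dequantized weights would then be retrained and re-quantized until convergence.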

  16. The coordinate coherent states approach revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn

    2013-02-15

    We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we argue against this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, and the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. Highlights: Suggest a canonical quantization in the coordinate coherent states approach. Prove the validity of the concept of point particles. Apply the canonical quantization to the Unruh effect and Hawking radiation. Find no deformations in the Unruh temperature and Hawking temperature. Provide the modified spectra of the Unruh effect and Hawking radiation.

  17. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2002-08-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  19. Modulated error diffusion CGHs for neural nets

    NASA Astrophysics Data System (ADS)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net, and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method, with attention to quantization noise effects.

  20. Quantum angular momentum diffusion of rigid bodies

    NASA Astrophysics Data System (ADS)

    Papendell, Birthe; Stickler, Benjamin A.; Hornberger, Klaus

    2017-12-01

    We show how to describe the diffusion of the quantized angular momentum vector of an arbitrarily shaped rigid rotor as induced by its collisional interaction with an environment. We present the general form of the Lindblad-type master equation and relate it to the orientational decoherence of an asymmetric nanoparticle in the limit of small anisotropies. The corresponding diffusion coefficients are derived for gas particles scattering off large molecules and for ambient photons scattering off dielectric particles, using the elastic scattering amplitudes.

  1. Direct comparison of fractional and integer quantized Hall resistance

    NASA Astrophysics Data System (ADS)

    Ahlers, Franz J.; Götz, Martin; Pierz, Klaus

    2017-08-01

    We present precision measurements of the fractional quantized Hall effect, where the quantized resistance $R[1/3]$ in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance $R[2]$, represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of $1 - (5.3 \pm 6.3) \times 10^{-8}$ (95% confidence level) was obtained for the ratio $R[1/3]/(6\,R[2])$. This constitutes the most precise comparison of integer resistance quantization (in terms of $h/e^2$) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.

  2. Becchi-Rouet-Stora-Tyutin formalism and zero locus reduction

    NASA Astrophysics Data System (ADS)

    Grigoriev, M. A.; Semikhatov, A. M.; Tipunin, I. Yu.

    2001-08-01

    In the Becchi-Rouet-Stora-Tyutin (BRST) quantization of gauge theories, the zero locus $Z_Q$ of the BRST differential Q carries an (anti)bracket whose parity is opposite to that of the fundamental bracket. Observables of the BRST theory are in a 1:1 correspondence with Casimir functions of the bracket on $Z_Q$. For any constrained dynamical system with phase space $N_0$ and constraint surface $\Sigma$, we prove its equivalence to the constrained system on the BFV-extended phase space with the constraint surface given by $Z_Q$. Reduction to the zero locus of the differential gives rise to relations between bracket operations and differentials arising in different complexes (the Gerstenhaber, Schouten, Berezin-Kirillov, and Sklyanin brackets); the equation ensuring the existence of a nilpotent vector field on the reduced manifold can be the classical Yang-Baxter equation. We also generalize our constructions to bi-QP manifolds, which from the BRST theory viewpoint correspond to the BRST-anti-BRST-symmetric quantization.

  3. Fast and efficient search for MPEG-4 video using adjacent pixel intensity difference quantization histogram feature

    NASA Astrophysics Data System (ADS)

    Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro

    2010-02-01

    In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram is utilized as the feature vector of the VOP (video object plane); this feature had previously been applied reliably to human face recognition. Instead of a fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search and a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video comprising TV programs such as dramas, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, more accurate and robust than conventional fast video search algorithms.
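
    A hedged sketch of an APIDQ-style feature: adjacent-pixel intensity differences are quantized into a few levels and histogrammed into a compact vector. The bin edges below are illustrative assumptions, not the paper's quantizer:

        import numpy as np

        def apidq_histogram(img, edges=(-255, -16, -4, 4, 16, 256)):
            """Adjacent-pixel intensity difference quantization histogram
            (sketch; edges are assumed, not taken from the paper).

            img: 2-D array, e.g. a DC image of a VOP.
            """
            dx = np.diff(img.astype(float), axis=1).ravel()  # horizontal diffs
            dy = np.diff(img.astype(float), axis=0).ravel()  # vertical diffs
            q = np.digitize(np.concatenate([dx, dy]), edges) # quantized levels
            hist = np.bincount(q, minlength=len(edges) + 1).astype(float)
            return hist / hist.sum()                         # normalized feature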

  4. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings, but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency feature. We then apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
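
    A hedged sketch of building such a hybrid feature: low-frequency 2-D DCT coefficients concatenated with simple statistics of DWT subbands. The exact statistical measures, quantization, and coefficient counts below are assumptions, not the paper's:

        import numpy as np
        import pywt
        from scipy.fftpack import dct

        def hybrid_frequency_features(face, n_dct=32):
            """Concatenate dominant 2-D DCT coefficients with DWT subband
            statistics (sketch; the paper's exact measures are not reproduced)."""
            f = face.astype(float)
            d = dct(dct(f, axis=0, norm='ortho'), axis=1, norm='ortho')
            dct_part = d[:8, :8].ravel()[:n_dct]             # low-frequency terms
            cA, (cH, cV, cD) = pywt.dwt2(f, 'haar')          # one DWT level
            dwt_part = np.array([s.mean() for s in (cA, cH, cV, cD)] +
                                [s.std() for s in (cA, cH, cV, cD)])
            return np.concatenate([dct_part, dwt_part])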

  5. Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications

    NASA Astrophysics Data System (ADS)

    Choi, Jinseok; Evans, Brian L.; Gatherer, Alan

    2017-12-01

    In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
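
    The closed-form result says the optimal bit count grows like log2 of SNR raised to the 1/3 power. A hedged numpy sketch of allocating bits this way under a total budget follows; the common-offset-and-round procedure is an assumption, not the paper's exact algorithm:

        import numpy as np

        def allocate_bits(snr, total_bits):
            """Allocate ADC bits across RF chains with b_i ~ log2(snr_i**(1/3)),
            shifted by a common offset to meet the budget, then clipped to >= 1
            (a sketch, not the paper's BA algorithm)."""
            base = np.log2(snr ** (1.0 / 3.0))
            offset = (total_bits - base.sum()) / len(snr)   # common shift
            return np.maximum(1, np.round(base + offset)).astype(int)

        # e.g. allocate_bits(np.array([10., 100., 1000., 10000.]), total_bits=16)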

  6. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  7. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  8. Identifying images of handwritten digits using deep learning in H2O

    NASA Astrophysics Data System (ADS)

    Sadhasivam, Jayakumar; Charanya, R.; Kumar, S. Harish; Srinivasan, A.

    2017-11-01

    Automatic digit recognition is of popular interest today, and deep learning techniques make object recognition in image data possible. Recognizing digits has become a fundamental component of real-world applications. Since digits are written in various styles, identifying a digit requires recognizing and classifying it with the assistance of machine learning methods. This work is based on a supervised learning vector quantization neural network, a class of artificial neural networks. Images of digits are recognized, trained, and tested: after the network is built, it is trained using training dataset vectors, and testing is applied to digit images, which are segmented from one another and resized accordingly for better accuracy.

  9. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector towards applications such as autonomous pedestrian collision avoidance.

  10. Simultaneous compression and encryption for secure real-time transmission of sensitive video

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.

    2014-05-01

    Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoder treats reference and non-reference video frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms achieve high compression and acceptable quality, and resist statistical and brute-force attacks with low computational processing.
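
    A hedged sketch of the chaotic-logistic-map side of the encryption step (the A5 cipher combination is omitted): iterate x <- r*x*(1-x) to derive a byte keystream and XOR it with the significant coefficients. The seed and parameter values are illustrative assumptions:

        import numpy as np

        def logistic_keystream(n_bytes, x0=0.613, r=3.99):
            """Byte keystream from the logistic map x <- r*x*(1-x) (sketch
            only; the paper combines this with an A5 cipher, omitted here)."""
            x, out = x0, np.empty(n_bytes, dtype=np.uint8)
            for i in range(n_bytes):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) & 0xFF
            return out

        def xor_encrypt(data_bytes, x0=0.613):
            """XOR data (uint8 array) with the keystream; the same call decrypts."""
            ks = logistic_keystream(len(data_bytes), x0)
            return np.bitwise_xor(data_bytes, ks)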

  12. Classification of postural profiles among mouth-breathing children by learning vector quantization.

    PubMed

    Mancini, F; Sousa, F S; Hummel, A D; Falcão, A E J; Yi, L C; Ortolani, C F; Sigulem, D; Pisa, I T

    2011-01-01

    Mouth breathing is a chronic syndrome that may bring about postural changes. Finding characteristic patterns of the changes occurring in the complex musculoskeletal system of mouth-breathing children has been a challenge. Learning vector quantization (LVQ) is an artificial neural network model that can be applied for this purpose. The aim of the present study was to apply LVQ to determine the characteristic postural profiles shown by mouth-breathing children, in order to further understand abnormal posture among mouth breathers. Postural training data on 52 children (30 mouth breathers and 22 nose breathers) and postural validation data on 32 children (22 mouth breathers and 10 nose breathers) were used. The performance of LVQ was compared with that of other classification models: self-organizing maps, back-propagation applied to multilayer perceptrons, Bayesian networks, naive Bayes, J48 decision trees, and k-nearest-neighbor classifiers. Classifier accuracy was assessed by means of leave-one-out cross-validation, area under the ROC curve (AUC), and inter-rater agreement (Kappa statistics). By using the LVQ model, five postural profiles for mouth-breathing children could be determined. LVQ showed satisfactory results for mouth-breathing and nose-breathing classification: sensitivity and specificity rates of 0.90 and 0.95, respectively, on the training dataset, and 0.95 and 0.90, respectively, on the validation dataset. The five postural profiles for mouth-breathing children suggested by LVQ were incorporated into application software for classifying the severity of mouth breathers' abnormal posture.

  13. Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme

    NASA Astrophysics Data System (ADS)

    Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen

    2016-06-01

    Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In existing studies, many typical game models, such as the prisoner's dilemma, battle of the sexes, and the Hawk-Dove game, have been extensively explored using the quantization approach. Following a similar method, several game models of opinion formation are quantized here on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can strikingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.

  14. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without Wiener filtering and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it desirable for aerial imagery applications.
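
    A generic sketch of a projected Landweber iteration with wavelet-domain soft-thresholding, the reconstruction loop this family of algorithms builds on (the step size, threshold schedule, and wavelet choice below are assumptions, not the paper's entropy-aware model):

        import numpy as np
        import pywt

        def pl_reconstruct(y, Phi, shape, n_iter=50, tau=0.05, wavelet='db4'):
            """Projected Landweber for y = Phi @ x with wavelet soft-
            thresholding as the projection step. Assumes Phi has spectral
            norm <= 1 (unit step size) and len(x) == shape[0]*shape[1]."""
            x = Phi.T @ y                                    # initial estimate
            for _ in range(n_iter):
                x = x + Phi.T @ (y - Phi @ x)                # Landweber update
                coeffs = pywt.wavedec2(x.reshape(shape), wavelet, level=3)
                arr, slices = pywt.coeffs_to_array(coeffs)
                arr = np.sign(arr) * np.maximum(np.abs(arr) - tau, 0)  # soft threshold
                rec = pywt.waverec2(
                    pywt.array_to_coeffs(arr, slices, output_format='wavedec2'),
                    wavelet)
                x = rec[:shape[0], :shape[1]].ravel()
            return x.reshape(shape)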

  15. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. The approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is a continuous-state-dependent surface used for sliding mode stability analysis; the other is a quantization-state-dependent surface used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. Using H∞ control analysis, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance from the initial time are achieved without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  16. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators $e^{2\pi i x}$ and $e^{2\pi i p}$. The Hilbert space we use remains the infinite-dimensional $L^2(\mathbb{R}, dx)$. The dynamics is given by a unitary quantum propagator such that as $\hbar \to 0$ the classical dynamics is recovered. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition $h = 1/N$, with $N \in \mathbb{Z}_+$, we construct an explicit isomorphism between $L^2(\mathbb{R}, dx)$ and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus $T^2$ of boundary conditions. The basis functions are distributions in $L^2(\mathbb{R}, dx)$, given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here, which is found in the language of pseudo-differential operators elsewhere in the mathematics literature. Specifically, at a fixed point of the dynamics on the θ-torus, we recover a finite-dimensional matrix propagator. We present this connection explicitly for several examples.

  17. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that should cooperate without interaction to achieve application objectives for distributed estimation systems. We suggest as a new cost function a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first present analysis showing that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution and discuss that our algorithm converges to a global optimum, due to the convexity of the cost function, and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments an obvious advantage in estimation performance as compared with typical designs and novel design techniques previously published.
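
    For reference, the classical cyclic generalized Lloyd iteration that the KL-divergence cost replaces alternates nearest-codeword partitioning with conditional-mean codeword updates; a scalar sketch (not the paper's probabilistic-distance design):

        import numpy as np

        def lloyd_max(samples, n_levels=8, n_iter=100):
            """Classical generalized Lloyd (Lloyd-Max) quantizer design:
            alternate nearest-level partitioning and conditional-mean updates."""
            levels = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))
            for _ in range(n_iter):
                idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
                for k in range(n_levels):
                    if np.any(idx == k):
                        levels[k] = samples[idx == k].mean()  # centroid update
            return np.sort(levels)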

  18. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.

  19. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
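
    A hedged sketch of the two core stages, wavelet decomposition followed by adaptive uniform scalar quantization (the step-size rule below is an assumption; the FBI specification fixes its own per-subband rules and entropy coder, which are omitted):

        import numpy as np
        import pywt

        def wsq_like_quantize(img, wavelet='bior4.4', level=4, q_scale=0.1):
            """DWT followed by uniform scalar quantization (sketch of the two
            core WSQ stages; a single global step is used here for brevity,
            where WSQ adapts the step per subband)."""
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            flat, slices = pywt.coeffs_to_array(coeffs)
            step = q_scale * flat.std() + 1e-9
            indices = np.round(flat / step)        # uniform quantizer indices
            return indices, step, slices           # enough to dequantize later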

  20. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment

    NASA Astrophysics Data System (ADS)

    Fonseca, I. C.; Bakke, K.

    2016-01-01

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  2. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer that is robust to channel errors, and a new adaptive pre/postfilter arrangement.
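
    The efficiency of the VSELP codebook comes from building each codevector as a ±1 combination of M basis vectors, so the 2**M codebook never needs to be stored and the search can reuse partial sums. A sketch of the codebook expansion (explicit expansion shown only for illustration; a real coder searches without materializing it):

        import numpy as np

        def vselp_codebook(basis):
            """Expand M basis vectors into the 2**M vector-sum codebook:
            c_i = sum_m theta_im * v_m, with theta_im = +/-1 from the bits of i."""
            M, dim = basis.shape
            codebook = np.empty((2 ** M, dim))
            for i in range(2 ** M):
                signs = np.array([1.0 if (i >> m) & 1 else -1.0 for m in range(M)])
                codebook[i] = signs @ basis
            return codebook

        # e.g. 7 basis vectors of length 40 -> 128 candidate excitation vectors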

  3. Gravitational surface Hamiltonian and entropy quantization

    NASA Astrophysics Data System (ADS)

    Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-02-01

    The surface Hamiltonian corresponding to the surface part of a gravitational action has an xp structure, where p is the conjugate momentum of x. Moreover, on the horizon of a black hole it leads to TS, where T and S are the temperature and entropy of the horizon. Imposing the hermiticity condition, we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this, we show that the entropy of the horizon is quantized. The analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of that earlier result, as the calculation is based on direct quantization of the Hamiltonian in the sense of usual quantum mechanics.

  4. The electronic structure of Au25 clusters: between discrete and continuous

    NASA Astrophysics Data System (ADS)

    Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E.; Kumar, Challa S. S. R.; Losovyj, Yaroslav

    2016-08-01

    Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.

  5. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.

  6. Automatic Generation of Caricatures with Multiple Expressions Using Transformative Approach

    NASA Astrophysics Data System (ADS)

    Liao, Wen-Hung; Lai, Chien-An

    The proliferation of digital cameras has changed the way we create and share photos. Novel forms of photo composition and reproduction have surfaced in recent years. In this paper, we present an automatic caricature generation system using transformative approaches. By combining facial feature detection, image segmentation, and image warping/morphing techniques, the system is able to generate a stylized caricature using only one reference image. When more than one reference sample is available, the system can either choose the best fit based on shape matching or synthesize a composite style using a polymorph technique. The system can also produce multiple expressions by controlling a subset of MPEG-4 facial animation parameters (FAP). Finally, to enable flexible manipulation of the synthetic caricature, we also investigate issues such as color quantization and raster-to-vector conversion. A major strength of our method is that the synthesized caricature bears a higher degree of resemblance to the real person than traditional component-based approaches.

  7. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    PubMed

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

    In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Exploiting the slight variation of quantization codes among different pixel exposures of the same object, the pixel array is divided into two groups: one performs coarse quantization of the high bits only, and the other performs fine quantization of the low bits. The complete quantization codes are then composed from the results of both the coarse and fine quantizations. This equivalent operation comparably reduces the total number of bits required for quantization. In a 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, were designed. Simulation results show that the average power consumption per line of the two versions is 6.47 × 10⁻⁸ J and 7.4 × 10⁻⁸ J, respectively, while the linearity of the two versions is 99.74% and 99.99%, respectively.
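
    A hedged sketch of the composition idea: one exposure is quantized coarsely (high bits kept), another finely (low bits kept), and the full code is composed from both, assuming the exposures see the same object. The bit split below is illustrative, not the chip's:

        import numpy as np

        def compose_codes(coarse_code, fine_code, low_bits=5):
            """Compose a full sample from a coarse quantization (high bits)
            and a fine quantization (low bits), as in the split-pixel-array
            scheme (sketch; codes may be ints or integer arrays)."""
            high = (coarse_code >> low_bits) << low_bits   # keep high bits only
            low = fine_code & ((1 << low_bits) - 1)        # keep low bits only
            return high | low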

  8. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    NASA Astrophysics Data System (ADS)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) development is a challenge for robotic, prosthesis, and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of a kernel linear discriminant analysis (KLDA) method to the weighted features transfers the data into a higher dimension, where the RBF kernel yields better discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III dataset IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
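
    A rough sketch of the final classification stage only, assuming precomputed CSP feature vectors X and labels y; scikit-learn's standard RBF-kernel SVM stands in for the paper's GRBF kernel (which is not implemented here), and all data are random placeholders.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 8))        # placeholder CSP features
      y = rng.integers(0, 2, size=100)     # right-hand vs. foot imagery labels

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
      clf.fit(X, y)
      print("training accuracy:", clf.score(X, y))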

  9. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward-adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-transform-based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
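
    A toy sketch of the analysis-by-synthesis excitation search described above, assuming a fixed AR(1) synthesis filter and a three-entry excitation codebook; all coefficients and signals are illustrative.

      import numpy as np

      a = 0.9   # AR coefficient (forward-adapted per macroblock in the paper)
      codebook = [np.array([1.0, 0.0, 0.0, 0.0]),
                  np.array([0.0, 1.0, -1.0, 0.0]),
                  np.array([1.0, -1.0, 1.0, -1.0])]

      def synthesize(excitation, x0):
          # Pass the candidate excitation through the AR synthesis filter.
          out, prev = [], x0
          for e in excitation:
              prev = a * prev + e
              out.append(prev)
          return np.array(out)

      def best_excitation(target, x0):
          # Pick the codeword whose synthesized output is closest to the target.
          errs = [np.sum((target - synthesize(c, x0)) ** 2) for c in codebook]
          return int(np.argmin(errs))

      target = np.array([0.95, 0.85, 0.9, 0.7])
      print("best codeword index:", best_excitation(target, x0=1.0))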

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindstrom, P; Cohen, J D

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
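
    A minimal sketch of the lossless linear-prediction-plus-residual idea described above, for a single row of gridded heights and the simplest predict-from-left-neighbor model; the real codec's predictor and entropy coder are more elaborate.

      import numpy as np

      def encode_row(heights):
          # Residual = actual height minus prediction (previous sample);
          # the first residual stores the raw first height (prediction 0).
          return np.diff(heights, prepend=0)

      def decode_row(residuals):
          # Cumulative sum exactly inverts the prediction: lossless.
          return np.cumsum(residuals)

      row = np.array([100, 102, 103, 103, 101], dtype=np.int64)
      res = encode_row(row)
      assert np.array_equal(decode_row(res), row)
      print(res)   # small residuals entropy-code well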

  11. On-chip integratable all-optical quantizer using strong cross-phase modulation in a silicon-organic hybrid slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Sang, Xinzhu; Wang, Kuiru; Wu, Qiang; Yan, Binbin; Li, Feng; Zhou, Xian; Zhong, Kangping; Zhou, Guiyao; Yu, Chongxiu; Farrell, Gerald; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.

    2016-01-01

    High-performance all-optical quantizers based on silicon waveguides are believed to have significant applications in photonic integratable optical communication links, optical interconnection networks, and real-time signal processing systems. In this paper, we propose an integratable all-optical quantizer for on-chip, low-power-consumption all-optical analog-to-digital converters. The quantization is realized by strong cross-phase modulation and interference in a silicon-organic hybrid (SOH) slot-waveguide-based Mach-Zehnder interferometer. By carefully designing the dimensions of the SOH waveguide, large nonlinear coefficients up to 16,000 and 18,069 W⁻¹/m for the pump and probe signals can be obtained, respectively, along with a low pulse walk-off parameter of 66.7 fs/mm and all-normal dispersion in the wavelength regime considered. Simulation results show that the phase shift of the probe signal can reach 8π at a low pump pulse peak power of 206 mW and a propagation length of 5 mm, such that a 4-bit all-optical quantizer can be realized. The corresponding signal-to-noise ratio is 23.42 dB and the effective number of bits is 3.89. PMID:26777054

  12. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a lossless compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region exhibits stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results demonstrate the effectiveness of our proposed method.
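
    A rough sketch of a PCA-based noise-separation step in the spirit of the one described above, assuming 8x8 image blocks and an illustrative split that discards the leading principal components; block size, component split, and data are placeholders, not the paper's settings.

      import numpy as np
      from sklearn.decomposition import PCA

      def high_freq_residual(img):
          h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
          blocks = (img[:h, :w].reshape(h // 8, 8, w // 8, 8)
                    .swapaxes(1, 2).reshape(-1, 64))
          pca = PCA(n_components=64).fit(blocks)
          coeffs = pca.transform(blocks)
          coeffs[:, :48] = 0                      # drop leading components
          noise = pca.inverse_transform(coeffs)   # keep high-frequency content
          return noise.reshape(h // 8, w // 8, 8, 8).swapaxes(1, 2).reshape(h, w)

      rng = np.random.default_rng(0)
      print(high_freq_residual(rng.normal(size=(128, 128))).shape)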

  13. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  15. Synthesis of a combined system for precise stabilization of the Spektr-UF observatory: II

    NASA Astrophysics Data System (ADS)

    Bychkov, I. V.; Voronov, V. A.; Druzhinin, E. I.; Kozlov, R. I.; Ul'yanov, S. A.; Belyaev, B. B.; Telepnev, P. P.; Ul'yashin, A. I.

    2014-03-01

    The paper presents the second part of the results of exploratory studies for the development of a combined system for high-precision stabilization of the optical telescope of the planned Spektr-UF international observatory [1]. A new modification of the rigorous method for the synthesis of nonlinear discrete-continuous stabilization systems with uncertainties is described, which is based on minimization of the guaranteed accuracy estimate calculated using vector Lyapunov functions. Using this method, the feedback parameters for the mode of precise inertial stabilization of the optical telescope axis are synthesized, taking into account structural nonrigidity, quantization of signals in time and level, orientation-sensor errors, and the errors and limits of the control torques of the flywheel actuators. The results of numerical experiments that demonstrate the quality of the synthesized system are presented.

  16. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  17. Analytical study of bound states in graphene nanoribbons and carbon nanotubes: The variable phase method and the relativistic Levinson theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miserev, D. S., E-mail: d.miserev@student.unsw.edu.au, E-mail: erazorheader@gmail.com

    2016-06-15

    The problem of localized states in 1D systems with a relativistic spectrum, namely, graphene stripes and carbon nanotubes, is studied analytically. The bound state as a superposition of two chiral states is completely described by their relative phase, which is the foundation of the variable phase method (VPM) developed herein. Based on our VPM, we formulate and prove the relativistic Levinson theorem. The problem of bound states can be reduced to the analysis of closed trajectories of some vector field. Remarkably, the Levinson theorem appears as the Poincaré index theorem for these closed trajectories. The VPM equation is also reduced to the nonrelativistic and semiclassical limits. The limit of a small momentum p_y of transverse quantization is applicable to an arbitrary integrable potential. In this case, a single confined mode is predicted.

  18. Comparing the performance of two CBIRS indexing schemes

    NASA Astrophysics Data System (ADS)

    Mueller, Wolfgang; Robbert, Guenter; Henrich, Andreas

    2003-01-01

    Content-based image retrieval (CBIR) as it is known today has to deal with a number of challenges. Quickly summarized, the main challenges are, firstly, to bridge the semantic gap between high-level concepts and low-level features using feedback and, secondly, to provide performance under adverse conditions. High-dimensional spaces, as well as a demanding machine learning task, make the right way of indexing an important issue. When indexing multimedia data, most groups opt for extraction of high-dimensional feature vectors from the data, followed by dimensionality reduction such as PCA (Principal Components Analysis) or LSI (Latent Semantic Indexing). The resulting vectors are indexed using spatial indexing structures such as kd-trees or R-trees, for example. Other projects, such as MARS and Viper, propose the adaptation of text indexing techniques, notably the inverted file. Here, the Viper system is the most direct adaptation of text retrieval techniques to quantized vectors. However, while the Viper query engine provides decent performance together with impressive user-feedback behavior, the possibility of easy integration of long-term learning algorithms, and support for potentially infinite feature vectors, there has been no comparison of vector-based methods and inverted-file-based methods under similar conditions. In this publication, we compare a CBIR query engine that uses inverted files (Bothrops, a rewrite of the Viper query engine based on a relational database) and a CBIR query engine based on LSD (Local Split Decision) trees for spatial indexing, using the same feature sets. The Benchathlon initiative works on providing a set of images and ground truth for simulating image queries by example and corresponding user feedback. When performing the Benchathlon benchmark on a CBIR system (the System Under Test, SUT), a benchmarking harness connects over the Internet to the SUT, performing a number of queries using an agreed-upon protocol, the multimedia retrieval markup language (MRML). Using this benchmark, one can measure the quality of retrieval as well as the overall (speed) performance of the benchmarked system. Our benchmarks will draw on the Benchathlon's work for documenting the retrieval performance of both inverted-file-based and LSD-tree-based techniques. In addition to these results, we will present statistics that can be obtained only inside the system under test. These statistics will include the number of complex mathematical operations, as well as the amount of data that has to be read from disk during execution of a query.

  19. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Audio recordings in digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression, which are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this work is the first to detect double compression of audio signals.
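
    A minimal sketch of the first-digit feature described above, fed to an SVM; coefficients here are random placeholders, whereas a real detector would take quantized MDCT coefficients from an MP3 decoder, and the labels are illustrative.

      import numpy as np
      from sklearn.svm import SVC

      def first_digit_hist(coeffs):
          c = np.abs(coeffs[coeffs != 0])
          first = (c / 10 ** np.floor(np.log10(c))).astype(int)   # leading digit 1..9
          return np.bincount(first, minlength=10)[1:10] / len(first)

      rng = np.random.default_rng(0)
      X = np.array([first_digit_hist(rng.standard_cauchy(4096)) for _ in range(40)])
      y = rng.integers(0, 2, size=40)   # placeholder single/double-compression labels
      print(SVC(kernel="rbf").fit(X, y).score(X, y))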

  20. Development of a brain MRI-based hidden Markov model for dementia recognition

    PubMed Central

    2013-01-01

    Background Dementia is an age-related cognitive decline which is indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help us understand disease development and contribute to early disease prediction and prevention. But building a model that best captures brain structural variability and that is valid for both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Methods Regularity dimension and semi-variogram were used to extract structural features of the brains, and a vector quantization method was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then utilized to estimate the parameters of the HMMs. To validate accuracy and robustness, experiments were carried out on individuals characterized as non-demented or as having mild Alzheimer's disease. Four HMMs were constructed based on cohorts of non-demented young, middle-aged, and elderly subjects and demented elderly subjects, respectively. Classification was carried out using a data set including both non-demented and demented individuals over a wide age range. Results The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved better classification accuracy compared to related works using different classifiers. The results show the ability of the proposed modeling to recognize early dementia. Conclusion The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia. The brain MRI-based HMMs developed in our proposed research are efficient and robust, and can be easily used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia. PMID:24564961
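
    A small sketch of the vector quantization step described above: k-means prototypes convert continuous feature vectors into discrete VQ indices, which a discrete-emission HMM can then consume; the feature values are random placeholders.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      features = rng.normal(size=(300, 4))   # e.g., regularity/semi-variogram features
      kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)

      vq_indices = kmeans.predict(features)  # one prototype index per feature vector
      print(vq_indices[:20])                 # observation symbols for HMM training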

  1. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-04

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  2. Approaching the Planck scale from a generally relativistic point of view: A philosophical appraisal of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Wuthrich, Christian

    My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.

  3. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPEG and JPEG2000. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.

  4. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    PubMed

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
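
    A minimal sketch of the gray-level quantization step whose settings the study varies, showing two common choices (equal-width and equal-probability bins); the level count and data are illustrative.

      import numpy as np

      def quantize_equal_width(img, levels=32):
          edges = np.linspace(img.min(), img.max(), levels + 1)
          return np.clip(np.digitize(img, edges[1:-1]), 0, levels - 1)

      def quantize_equal_prob(img, levels=32):
          edges = np.quantile(img, np.linspace(0, 1, levels + 1))
          return np.clip(np.digitize(img, edges[1:-1]), 0, levels - 1)

      rng = np.random.default_rng(0)
      adc = rng.gamma(shape=2.0, scale=1.0, size=(64, 64))   # placeholder ADC map
      q = quantize_equal_width(adc)
      print(q.min(), q.max())   # 0 .. 31; Haralick features are computed on q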

  5. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distributions. This source model consists of rate-quantization (R-Q), distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is highly stable, has low complexity, and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
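
    A small sketch of a dead-zone plus uniform threshold scalar quantizer of the kind analyzed above, with a nearly uniform reconstruction rule; the rounding offset f (which sets the dead-zone width) is illustrative.

      import numpy as np

      def deadzone_quantize(x, step, f=1/6):
          # Values with |x| < step*(1-f) fall into the central dead zone (index 0).
          return np.sign(x) * np.floor(np.abs(x) / step + f)

      def reconstruct(idx, step):
          return idx * step   # nearly uniform reconstruction levels

      x = np.array([-1.7, -0.2, 0.1, 0.9, 2.3])
      idx = deadzone_quantize(x, step=1.0)
      print(idx, reconstruct(idx, step=1.0))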

  6. Three learning phases for radial-basis-function networks.

    PubMed

    Schwenker, F; Kestler, H A; Palm, G

    2001-05-01

    In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization, and classification tree algorithms, and the output layer by supervised learning (through gradient descent or a pseudo-inverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented for three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third, backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. SV learning can be considered, in this context, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including k-nearest-neighbor, learning vector quantization, and RBF classifiers trained through two-phase, three-phase, and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
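
    A compact sketch of two-phase RBF learning as described above: phase one places the centers by clustering (k-means here), phase two solves the output-layer weights by pseudo-inverse; the global width heuristic and toy data are assumptions for illustration.

      import numpy as np
      from sklearn.cluster import KMeans

      def train_rbf(X, y, n_centers=10):
          # Phase 1: unsupervised placement of RBF centers.
          centers = KMeans(n_clusters=n_centers, n_init=10,
                           random_state=0).fit(X).cluster_centers_
          d = np.linalg.norm(centers[:, None] - centers[None], axis=2)
          sigma = d[d > 0].mean()   # simple global width heuristic
          # Phase 2: supervised output weights via pseudo-inverse.
          Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2
                       / (2 * sigma ** 2))
          return centers, sigma, np.linalg.pinv(Phi) @ y

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 2))
      y = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy labels
      centers, sigma, W = train_rbf(X, y)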

  7. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds under distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this paper is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing.

  8. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
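
    A toy sketch of the design criterion described above: choose quantizer thresholds that maximize the mutual information between the source bit and the quantized message, here for BPSK over AWGN with a 3-bit symmetric quantizer and a tiny grid search; the channel, SNR, and threshold grid are illustrative, not the paper's setup.

      import numpy as np

      def mutual_info(t, snr_db=1.0, n=100_000):
          rng = np.random.default_rng(0)
          sigma = 10 ** (-snr_db / 20)
          x = rng.choice([-1.0, 1.0], size=n)
          y = x + sigma * rng.normal(size=n)
          edges = np.concatenate([-t[::-1], [0.0], t])   # 7 thresholds, 8 cells
          q = np.digitize(y, edges)
          pxq = np.array([[np.mean((x == s) & (q == c)) for c in range(8)]
                          for s in (-1.0, 1.0)]) + 1e-12
          px, pq = pxq.sum(1, keepdims=True), pxq.sum(0, keepdims=True)
          return np.sum(pxq * np.log2(pxq / (px * pq)))

      grid = [np.array([a, b, c]) for a in (0.3, 0.6)
              for b in (0.9, 1.2) for c in (1.5, 2.0)]
      best = max(grid, key=mutual_info)
      print("best symmetric thresholds:", best)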

  9. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1978-01-01

    This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.

  10. Phase-Quantized Block Noncoherent Communication

    DTIC Science & Technology

    2013-07-01

    …in a carrier asynchronous system. Specifically, we consider transmission over the block noncoherent additive white Gaussian noise channel, and … block noncoherent channel. Several results, based on the symmetry inherent in the channel model, are provided to characterize this transition density.

  11. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization.

    PubMed

    Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H

    2018-05-17

    This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save the limited communication resources. Second, a novel communication framework is employed using a logarithmic quantizer that quantizes and reduces the data transmission rate in the network, which improves the communication efficiency of the network. Third, a stabilization criterion is derived based on a sufficient condition which guarantees a prescribed H∞ performance level of the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme.
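
    A small sketch of a logarithmic quantizer of the type employed above: quantization levels form a geometric ladder u_i = rho**i * u0, giving a relative error bounded by delta = (1 - rho)/(1 + rho); the density parameter rho and scale u0 are illustrative.

      import numpy as np

      def log_quantize(v, rho=0.8, u0=1.0):
          if v == 0:
              return 0.0
          delta = (1 - rho) / (1 + rho)
          # Index of the ladder level whose quantization cell contains |v|.
          i = np.ceil(np.log(abs(v) * (1 + delta) / u0) / np.log(rho))
          return np.sign(v) * u0 * rho ** i

      for v in (0.03, 0.5, -2.7):
          q = log_quantize(v)
          print(v, "->", q, "relative error:", abs(q - v) / abs(v))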

  12. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.

  13. Electrically pumped graphene-based Landau-level laser

    NASA Astrophysics Data System (ADS)

    Brem, Samuel; Wendler, Florian; Winnerl, Stephan; Malic, Ermin

    2018-03-01

    Graphene exhibits a nonequidistant Landau quantization with tunable Landau-level (LL) transitions in the technologically desired terahertz spectral range. Here, we present a strategy for an electrically driven terahertz laser based on Landau-quantized graphene as the gain medium. Performing microscopic modeling of the coupled electron, phonon, and photon dynamics in such a laser, we reveal that an inter-LL population inversion can be achieved, resulting in the emission of coherent terahertz radiation. The present paper provides a concrete recipe for the experimental realization of tunable graphene-based terahertz laser systems.

  14. Measurement analysis and quantum gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albers, Mark; Kiefer, Claus; Reginatto, Marcel

    2008-09-15

    We consider the question of whether consistency arguments based on measurement theory show that the gravitational field must be quantized. Motivated by the argument of Eppley and Hannah, we apply a DeWitt-type measurement analysis to a coupled system that consists of a gravitational wave interacting with a mass cube. We also review the arguments of Eppley and Hannah and of DeWitt, and investigate a second model in which a gravitational wave interacts with a quantized scalar field. We argue that one cannot conclude from the existing gedanken experiments that gravity has to be quantized. Despite the many physical arguments which speak in favor of a quantum theory of gravity, it appears that the justification for such a theory must be based on empirical tests and does not follow from logical arguments alone.

  15. Quantized conductance operation near a single-atom point contact in a polymer-based atomic switch

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Muruganathan, Manoharan; Tsuruoka, Tohru; Mizuta, Hiroshi; Aono, Masakazu

    2017-06-01

    Highly-controlled conductance quantization is achieved near a single-atom point contact in a redox-based atomic switch device, in which a poly(ethylene oxide) (PEO) film is sandwiched between Ag and Pt electrodes. Current-voltage measurements revealed reproducible quantized conductance of ~1 G₀ for more than 10² continuous voltage sweep cycles under a specific condition, indicating the formation of a well-defined single-atom point contact of Ag in the PEO matrix. The device exhibited a conductance state distribution centered at 1 G₀, with distinct half-integer multiples of G₀ and small fractional variations. First-principles density functional theory simulations showed that the experimental observations could be explained by the existence of a tunneling gap and the structural rearrangement of an atomic point contact.

  16. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  17. Electro-gravity via geometric chronon field

    NASA Astrophysics Data System (ADS)

    Suchard, Eytan H.

    2017-05-01

    In De Sitter / Anti De Sitter space-time and in other geometries, reference sub-manifolds from which proper time is measured along integral curves are described as events. We introduce here a foliation with the help of a scalar field. The scalar field need not be unique, but from its gradient an intrinsic Reeb vector of the foliations perpendicular to the gradient vector is calculated. The Reeb vector describes the acceleration of a physical particle that moves along the integral curves that are formed by the gradient of the scalar field. The Reeb vector appears as a component of an anti-symmetric matrix which is a part of a rank-2 2-form. The 2-form is extended into a non-degenerate 4-form and into a rank-4 matrix of a 2-form, which, when multiplied by the velocity of a particle, becomes the acceleration of the particle. The matrix has one U(1) degree of freedom and additional SU(2) degrees of freedom in two vectors that span the plane perpendicular to the gradient of the scalar field and to the Reeb vector. In total, there are U(1) x SU(2) degrees of freedom. SU(3) degrees of freedom arise from three-dimensional foliations but require an additional symmetry to exist in order to have a valid covariant meaning. Matter in the Einstein-Grossmann equation is replaced by the action of the acceleration field, i.e., by a geometric action which is not anticipated by the metric alone. This idea leads to a new formalism that replaces the conventional stress-energy-momentum tensor. The formalism will be mainly developed for classical physics but will also be discussed for quantized physics based on events instead of particles. The result is that a positive charge manifests a small attracting gravity and a stronger but still small repelling acceleration field that repels even uncharged particles that have rest mass. A negative charge manifests repelling anti-gravity but also a stronger acceleration field that attracts even uncharged particles that have rest mass. Preliminary version: http://sciencedomain.org/abstract/9858

  18. An Alternative to the Gauge Theoretic Setting

    NASA Astrophysics Data System (ADS)

    Schroer, Bert

    2011-10-01

    The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum theoretical access in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict creating pointlike generation with the tightest consistent localization: semiinfinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.

  19. Covariant open bosonic string field theory on multiple D-branes in the proper-time gauge

    NASA Astrophysics Data System (ADS)

    Lee, Taejin

    2017-12-01

    We construct a covariant open bosonic string field theory on multiple D-branes, which reduces to a non-Abelian Yang-Mills gauge theory in the zero-slope limit. Making use of the first-quantized open bosonic string in the proper-time gauge, we convert the string amplitudes given by the Polyakov path integrals on string world sheets into those of the second-quantized theory. The world sheet diagrams generated by the constructed open string field theory are planar, in contrast to those of Witten's cubic string field theory. However, the constructed string field theory is nevertheless equivalent to Witten's cubic string field theory. Having obtained planar diagrams, we may adopt the light-cone string field theory technique to calculate multi-string scattering amplitudes with an arbitrary number of external strings. We examine in detail the three-string vertex diagram and the effective four-string vertex diagrams generated perturbatively by the three-string vertex at tree level. In the zero-slope limit, the string scattering amplitudes are identified precisely as those of non-Abelian Yang-Mills gauge theory if the external states are chosen to be massless vector particles.

  20. Metaplectic-c Quantomorphisms

    NASA Astrophysics Data System (ADS)

    Vaughan, Jennifer

    2015-03-01

    In the classical Kostant-Souriau prequantization procedure, the Poisson algebra of a symplectic manifold (M,ω) is realized as the space of infinitesimal quantomorphisms of the prequantization circle bundle. Robinson and Rawnsley developed an alternative to the Kostant-Souriau quantization process in which the prequantization circle bundle and metaplectic structure for (M,ω) are replaced by a metaplectic-c prequantization. They proved that metaplectic-c quantization can be applied to a larger class of manifolds than the classical recipe. This paper presents a definition for a metaplectic-c quantomorphism, which is a diffeomorphism of metaplectic-c prequantizations that preserves all of their structures. Since the structure of a metaplectic-c prequantization is more complicated than that of a circle bundle, we find that the definition must include an extra condition that does not have an analogue in the Kostant-Souriau case. We then define an infinitesimal quantomorphism to be a vector field whose flow consists of metaplectic-c quantomorphisms, and prove that the space of infinitesimal metaplectic-c quantomorphisms exhibits all of the same properties that are seen for the infinitesimal quantomorphisms of a prequantization circle bundle. In particular, this space is isomorphic to the Poisson algebra C^∞(M).

  1. Classical Field Theory and the Stress-Energy Tensor

    NASA Astrophysics Data System (ADS)

    Swanson, Mark S.

    2015-09-01

    This book is a concise introduction to the key concepts of classical field theory for beginning graduate students and advanced undergraduate students who wish to study the unifying structures and physical insights provided by classical field theory without dealing with the additional complication of quantization. In that regard, there are many important aspects of field theory that can be understood without quantizing the fields. These include the action formulation, Galilean and relativistic invariance, traveling and standing waves, spin angular momentum, gauge invariance, subsidiary conditions, fluctuations, spinor and vector fields, conservation laws and symmetries, and the Higgs mechanism, all of which are often treated briefly in a course on quantum field theory. The variational form of classical mechanics and continuum field theory are both developed in the time-honored graduate level text by Goldstein et al (2001). An introduction to classical field theory from a somewhat different perspective is available in Soper (2008). Basic classical field theory is often treated in books on quantum field theory. Two excellent texts where this is done are Greiner and Reinhardt (1996) and Peskin and Schroeder (1995). Green's function techniques are presented in Arfken et al (2013).

  2. Prediction-guided quantization for video tone mapping

    NASA Astrophysics Data System (ADS)

    Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice

    2014-09-01

    Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content for Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone-mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone-mapped video content. Our technique provides an appropriate quantization for each mode of both the intra- and inter-prediction that is performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested in two different scenarios: the compression of tone-mapped LDR video content (using HM 10.0) and the compression of perceptually encoded HDR content (HM 14.0). Results show, at the same PSNR, average bit-rate reductions over all sequences and TMOs considered of 20.3% and 27.3% for tone-mapped content, and of 2.4% and 2.7% for HDR content.

  3. Quantization of an electromagnetic field in two-dimensional photonic structures based on the scattering matrix formalism ( S-quantization)

    NASA Astrophysics Data System (ADS)

    Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.

    2017-10-01

    Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related to each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similarly to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity. In addition, this method makes it possible to obviate difficulties that arise in the analysis of the Purcell effect because of the divergence of the integral describing the effective volume of the mode in open systems.
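
    A compact one-dimensional analogue of the matrix bookkeeping described above (an assumption for illustration; the paper's method works per Fourier component in 2D): each interface and each homogeneous layer contributes a 2x2 matrix acting on forward/backward wave amplitudes, and the structure's matrix is their ordered product. Layer data and wavelength are illustrative.

      import numpy as np

      def interface(n1, n2):
          # Fresnel coefficients at normal incidence.
          r, t = (n1 - n2) / (n1 + n2), 2 * n1 / (n1 + n2)
          return np.array([[1, r], [r, 1]], dtype=complex) / t

      def propagation(n, d, wavelength):
          delta = 2 * np.pi * n * d / wavelength
          return np.diag([np.exp(-1j * delta), np.exp(1j * delta)])

      wavelength = 500e-9
      n_stack = [1.0, 1.5, 2.1, 1.0]   # ambient, two layers, substrate
      d_stack = [125e-9, 60e-9]        # layer thicknesses

      M = interface(n_stack[0], n_stack[1])
      for i, d in enumerate(d_stack):
          M = M @ propagation(n_stack[i + 1], d, wavelength)
          M = M @ interface(n_stack[i + 1], n_stack[i + 2])

      r = M[1, 0] / M[0, 0]   # complex reflection coefficient
      print("reflectance:", abs(r) ** 2)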

  4. Segmentation-based L-filtering of speckle noise in ultrasonic images

    NASA Astrophysics Data System (ADS)

    Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis

    1994-05-01

    We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model by using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters that correspond to and operate on different image regions. Simulation results on a simulated US B-mode image of a tissue mimicking phantom are presented which verify the superiority of the proposed method as compared to a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
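
    A minimal sketch of the L-filter itself: the output is a fixed weighted sum of the sorted samples (order statistics) in a sliding window; with uniform weights it reduces to a moving average, and with a one-hot middle weight to a median filter. The weights below are illustrative, not the MMSE-optimal ones designed in the paper.

      import numpy as np

      def l_filter(signal, weights):
          w = len(weights)
          pad = np.pad(signal, w // 2, mode="edge")
          out = np.empty(len(signal))
          for i in range(len(signal)):
              window = np.sort(pad[i:i + w])   # order statistics of the window
              out[i] = weights @ window
          return out

      weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # emphasizes the median region
      noisy = np.array([1.0, 1.2, 9.0, 1.1, 0.9, 1.0, 1.3])
      print(l_filter(noisy, weights))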

  5. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, number-theoretic coding for iconic systems.

  6. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-frequency-range, high-resolution frequency measurement method based on the quantized phase step law is presented in this paper. By utilizing a variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, combined with an A/D converter and the adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 counter error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals are realized without frequency normalization. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
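
    The ±1 counter error that the method eliminates is easy to reproduce numerically: when a fixed counting gate is not synchronized to the signal, the edge count fluctuates by one unit from trial to trial. A minimal demonstration, with assumed signal and gate parameters:

```python
import numpy as np

# Conventional frequency counting: count signal periods inside a fixed gate.
# Because the gate is not synchronized to the signal, the count varies by one
# unit from trial to trial -- the classic +/-1 counter error that gating at
# phase coincidences is designed to eliminate.
f_sig, t_gate = 10_000_123.0, 1e-3   # 10.000123 MHz signal, 1 ms gate
rng = np.random.default_rng(0)
for _ in range(8):
    phase = rng.uniform(0, 1)                     # random gate/signal alignment
    counts = int(np.floor(f_sig * t_gate + phase))
    print(counts, "-> f =", counts / t_gate, "Hz")
# Counts alternate between 10000 and 10001, a 1 kHz ambiguity for a 1 ms gate.
```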

  7. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
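
    The tracking-error transformation step can be made concrete with a standard prescribed-performance construction. The exponential bound and inverse-hyperbolic-tangent map below are common choices in this literature and are assumptions here, not necessarily the exact forms used in the paper:

```python
import numpy as np

def performance_bound(t, rho0=2.0, rho_inf=0.1, a=1.0):
    """Exponentially decaying, time-varying error bound rho(t)."""
    return (rho0 - rho_inf) * np.exp(-a * t) + rho_inf

def transform_error(e, rho):
    """Map a constrained error e in (-rho, rho) to an unconstrained
    variable z. z blows up as e approaches the boundary, so keeping z
    bounded (which the controller does) enforces the error envelope."""
    x = np.clip(e / rho, -0.999999, 0.999999)
    return 0.5 * np.log((1 + x) / (1 - x))   # inverse hyperbolic tangent

t = np.linspace(0, 5, 6)
e = 0.5 * np.exp(-t)                          # a decaying tracking error
z = transform_error(e, performance_bound(t))  # augmented-system state
```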

  8. Mathematics of Quantization and Quantum Fields

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Gérard, Christian

    2013-03-01

    Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.

  9. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
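
    The block-matching step can be sketched as a full-search matcher minimizing the sum of absolute differences (SAD); each block's search is independent of the others, which is what makes the algorithm map naturally onto parallel hardware. Names and parameters below are illustrative:

```python
import numpy as np

def block_match(prev, curr, bx, by, bs=8, search=7):
    """Find the displacement of the block at (by, bx) in `curr` that
    minimizes the SAD against the previous frame `prev`."""
    block = curr[by:by + bs, bx:bx + bs].astype(int)
    best, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(prev[y:y + bs, x:x + bs].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv, best
```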

  10. Theoretical nuclear physics

    NASA Astrophysics Data System (ADS)

    Rost, E.; Shephard, J. R.

    1992-08-01

    This report discusses the following topics: Exact 1-loop vacuum polarization effects in 1 + 1 dimensional QHD; exact 1-fermion loop contributions in 1 + 1 dimensional solitons; exact scalar 1-loop contributions in 1 + 3 dimensions; exact vacuum calculations in a hyper-spherical basis; relativistic nuclear matter with self-consistent correlation energy; consistent RHA-RPA for finite nuclei; transverse response functions in the (triangle)-resonance region; hadronic matter in a nontopological soliton model; scalar and vector contributions to (bar p)p yields (bar lambda)lambda reaction; 0+ and 2+ strengths in pion double-charge exchange to double giant-dipole resonances; and nucleons in a hybrid sigma model including a quantized pion field.

  11. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
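
    The idea behind the first two stages of the pipeline can be illustrated with a toy block: remove the slowly varying background level, then zero residuals below a threshold so that only strong peaks (the point sources) survive. This sketch uses a plain block mean and a fixed threshold; the paper's "minimum-mean removal" and adaptive threshold are more refined.

```python
import numpy as np

def encode_block(block, threshold):
    """Background removal plus threshold truncation (illustrative)."""
    base = block.mean()
    residual = block - base
    truncated = np.where(np.abs(residual) >= threshold, residual, 0.0)
    return base, truncated

blk = np.array([[10, 11, 9], [10, 250, 10], [11, 10, 12]], float)
base, resid = encode_block(blk, threshold=20)
# The bright point source (250) is preserved in `resid`; the near-constant
# background collapses to the single value `base`, ready for entropy coding.
```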

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovchavtsev, A. P., E-mail: kap@isp.nsc.ru; Tsarenko, A. V.; Guzev, A. A.

    The influence of electron energy quantization in a space-charge region on the accumulation capacitance of InAs-based metal-oxide-semiconductor capacitors (MOSCAPs) has been investigated by modeling and by comparison with experimental data from Au/anodic layer (4-20 nm)/n-InAs(111)A MOSCAPs. The accumulation capacitance has been calculated by solving the Poisson equation under different assumptions and by the self-consistent solution of the Schrödinger and Poisson equations with quantization taken into account. It was shown that quantization should be taken into consideration in MOSCAP accumulation-capacitance calculations for the correct determination of the interface state density by the Terman method and for the evaluation of the gate dielectric thickness from capacitance-voltage measurements.

  13. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects of signal quantization on the operation of a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and implemented by a digital computer program based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
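
    A minimal numerical illustration of the kind of question the report's program answers (this is not the report's program): run the same digital filter in reduced-precision and full-precision floating point and measure the maximum and rms errors at the output.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
a = 0.95  # pole of a first-order IIR filter y[n] = a*y[n-1] + x[n]

def run_filter(x, a, dtype):
    y = np.zeros(len(x), dtype=dtype)
    acc = dtype(0)
    for n, xn in enumerate(x):
        acc = dtype(a) * acc + dtype(xn)  # rounding occurs at each operation
        y[n] = acc
    return y

err = run_filter(x, a, np.float16).astype(np.float64) - run_filter(x, a, np.float64)
print("max error:", np.abs(err).max(), "rms error:", np.sqrt((err**2).mean()))
```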

  14. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
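
    A sketch of the multiplier selection, under stated assumptions: the sensitivity matrix and the Minkowski pooling exponent below are stand-ins for the paper's calibrated contrast-sensitivity, light-adaptation, and masking model, and the grid search replaces whatever optimizer the authors used. The structure, however, is the same: per-block perceptual error drives a per-block quantization-matrix multiplier toward a common error level.

```python
import numpy as np
from scipy.fft import dctn

def perceptual_error(block, Q, m, sens):
    """Perceptual error of coding an 8x8 block with matrix m*Q: DCT
    quantization error weighted by a sensitivity matrix and pooled."""
    c = dctn(block, norm='ortho')
    cq = np.round(c / (m * Q)) * (m * Q)           # quantize / dequantize
    return (((c - cq) * sens) ** 4).sum() ** 0.25  # Minkowski pooling

def pick_multiplier(block, Q, sens, target,
                    candidates=np.arange(0.25, 4.25, 0.25)):
    """Largest multiplier (fewest bits) whose perceptual error stays at
    or below the target, driving all blocks toward equal error."""
    ok = [m for m in candidates if perceptual_error(block, Q, m, sens) <= target]
    return max(ok) if ok else candidates[0]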

  15. The electronic structure of Au25 clusters: between discrete and continuous.

    PubMed

    Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E; Kumar, Challa S S R; Losovyj, Yaroslav

    2016-08-21

    Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.

  16. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  17. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  18. Method and system employing finite state machine modeling to identify one of a plurality of different electric load types

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Liang; Yang, Yi; Harley, Ronald Gordon

    A system is provided for a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads; and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
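
    The quantize-then-run-length-encode core of this pipeline is easy to sketch. The level boundaries and profile below are illustrative assumptions; the patent's state-typing and FSM matching are additional layers on top of this representation.

```python
import numpy as np

def to_state_sequence(rms_profile, levels):
    """Quantize a power/current RMS profile into discrete state-values
    and run-length encode it into (state, duration) pairs -- the raw
    material for a finite-state-machine load signature."""
    states = np.digitize(rms_profile, levels)
    seq, start = [], 0
    for i in range(1, len(states) + 1):
        if i == len(states) or states[i] != states[start]:
            seq.append((int(states[start]), i - start))  # (state, duration)
            start = i
    return seq

profile = np.array([0, 0, 5.1, 5.0, 4.9, 2.0, 2.1, 2.0, 2.0, 0])
print(to_state_sequence(profile, levels=[1.0, 3.5]))
# [(0, 2), (2, 3), (1, 4), (0, 1)] -> off, start-up surge, steady run, off
```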

  19. Quantization ambiguities and bounds on geometric scalars in anisotropic loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Singh, Parampreet; Wilson-Ewing, Edward

    2014-02-01

    We study quantization ambiguities in loop quantum cosmology that arise for space-times with non-zero spatial curvature and anisotropies. Motivated by lessons from different possible loop quantizations of the closed Friedmann-Lemaître-Robertson-Walker cosmology, we find that using open holonomies of the extrinsic curvature, which due to gauge-fixing can be treated as a connection, leads to the same quantum geometry effects that are found in spatially flat cosmologies. More specifically, in contrast to the quantization based on open holonomies of the Ashtekar-Barbero connection, the expansion and shear scalars in the effective theories of the Bianchi type II and Bianchi type IX models have upper bounds, and these are in exact agreement with the bounds found in the effective theories of the Friedmann-Lemaître-Robertson-Walker and Bianchi type I models in loop quantum cosmology. We also comment on some ambiguities present in the definition of inverse triad operators and their role.

  20. Path integral solution for a Klein-Gordon particle in vector and scalar deformed radial Rosen-Morse-type potentials

    NASA Astrophysics Data System (ADS)

    Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.

    2017-12-01

    The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q ≤ -1 and (1/(2α)) ln|q| < r < ∞, it is shown that the quantization conditions for the bound-state energy levels E_{n_r} are transcendental equations which can be solved numerically. Three special cases, namely the standard radial Manning-Rosen potential (|q| = 1), the standard radial Rosen-Morse potential (V2 → -V2, q = 1) and the radial Eckart potential (V1 → -V1, q = 1), are also briefly discussed.

  1. Meson effective mass in the isospin medium in hard-wall AdS/QCD model

    NASA Astrophysics Data System (ADS)

    Mamedov, Shahin

    2016-02-01

    We study the mass splitting of the light vector, axial-vector, and pseudoscalar mesons in the isospin medium in the framework of the hard-wall model. We write an effective mass definition for the interacting gauge fields and the scalar field introduced in the gauge field theory in the bulk of AdS space-time. Relying on holographic duality, we obtain a formula for the effective mass of a boundary meson in terms of a derivative operator with respect to the extra bulk coordinate. The effective mass found in this way coincides with the one obtained from the poles of the two-point correlation function. To avoid introducing distinct infrared boundaries in the quantization formula for the different mesons of the same isotriplet, we introduce extra action terms at this boundary, which reduce the distinct boundary values to a common one. Profile function solutions and effective mass expressions are found for the in-medium ρ, a_1, and π mesons.

  2. Digital Correlation Microwave Polarimetry: Analysis and Demonstration

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)

    2000-01-01

    The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in Earth remote sensing is presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method is successfully compared with results using a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
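
    The digital-to-analog inversion step can be illustrated by simulation. Below, two correlated Gaussian streams are quantized to three levels {-1, 0, +1}, the digital correlation coefficient is measured across known analog correlations, and a fifth-order polynomial is fit to invert the mapping, mimicking (but not reproducing) the paper's regression; the threshold value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
v = 1.0  # quantizer threshold in units of the signal rms (assumed)

def three_level_corr(rho, n=200_000):
    """Normalized correlation of two unit-variance Gaussian streams
    after three-level quantization to {-1, 0, +1}."""
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    qx = np.sign(x) * (np.abs(x) > v)
    qy = np.sign(y) * (np.abs(y) > v)
    return np.mean(qx * qy) / np.sqrt(np.mean(qx**2) * np.mean(qy**2))

# Build the digital -> analog transfer function empirically, then fit a
# fifth-order polynomial to invert it.
rho = np.linspace(-0.95, 0.95, 39)
rho_d = np.array([three_level_corr(r) for r in rho])
coeffs = np.polyfit(rho_d, rho, deg=5)
print(np.polyval(coeffs, three_level_corr(0.3)))  # ~0.3 recovered
```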

  3. Skyrme insulators: insulators at the brink of superconductivity

    DOE PAGES

    Erten, Onur; Chang, Po-Yao; Coleman, Piers; ...

    2017-08-04

    Current theories of superfluidity are based on the idea of a coherent quantum state with topologically protected, quantized circulation. When this topological protection is absent, as in the case of 3He-A, the coherent quantum state no longer supports persistent superflow. In this paper, we argue that the loss of topological protection in a superconductor gives rise to an insulating ground state. Specifically, we introduce the concept of a Skyrme insulator to describe the coherent dielectric state that results from the topological failure of superflow carried by a complex vector order parameter. Here, we apply this idea to the case of SmB6, arguing that the observation of a diamagnetic Fermi surface within an insulating bulk can be understood as a realization of this state. Our theory enables us to understand the linear specific heat of SmB6 in terms of a neutral Majorana Fermi sea and leads us to predict that in low fields of order a Gauss, SmB6 will develop a Meissner effect.

  4. Skyrme Insulators: Insulators at the Brink of Superconductivity

    NASA Astrophysics Data System (ADS)

    Erten, Onur; Chang, Po-Yao; Coleman, Piers; Tsvelik, Alexei M.

    2017-08-01

    Current theories of superfluidity are based on the idea of a coherent quantum state with topologically protected quantized circulation. When this topological protection is absent, as in the case of 3He -A , the coherent quantum state no longer supports persistent superflow. Here, we argue that the loss of topological protection in a superconductor gives rise to an insulating ground state. We specifically introduce the concept of a Skyrme insulator to describe the coherent dielectric state that results from the topological failure of superflow carried by a complex-vector order parameter. We apply this idea to the case of SmB6 , arguing that the observation of a diamagnetic Fermi surface within an insulating bulk can be understood as a realization of this state. Our theory enables us to understand the linear specific heat of SmB6 in terms of a neutral Majorana Fermi sea and leads us to predict that in low fields of order a Gauss, SmB6 will develop a Meissner effect.

  5. Performance analysis of clustering techniques over microarray data: A case study

    NASA Astrophysics Data System (ADS)

    Dash, Rasmita; Misra, Bijan Bihari

    2018-03-01

    Handling big data is one of the major issues in the field of statistical data analysis. In such investigations, cluster analysis plays a vital role in dealing with large-scale data. There are many clustering techniques with different cluster-analysis approaches, but which approach suits a particular dataset is difficult to predict. To deal with this problem, a grading approach is introduced over many clustering techniques to identify a stable technique. Because the grading depends on the characteristics of the dataset as well as on the validity indices, a two-stage grading approach is implemented. In this study the grading approach is applied to five clustering techniques: hybrid swarm based clustering (HSC), k-means, partitioning around medoids (PAM), vector quantization (VQ) and agglomerative nesting (AGNES). The experimentation is conducted over five microarray datasets with seven validity indices. The finding of the grading approach that a clustering technique is significant is further established by the Nemenyi post-hoc hypothesis test.

  6. Sleep versus wake classification from heart rate variability using computational intelligence: consideration of rejection in classification models.

    PubMed

    Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie

    2008-01-01

    Reliability of classification performance is important for many biomedical applications. A classification model which considers reliability in its development, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG)-determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural networks, multilayer perceptron (MLP) neural networks, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
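
    The rejection mechanism itself is simple to sketch: accept a decision only when the classifier's confidence clears a threshold, trading coverage for reliability. The posterior-threshold rule below is one common realization and an assumption here, not necessarily the paper's exact criterion.

```python
import numpy as np

def classify_with_rejection(probs, threshold=0.8):
    """Accept a sleep/wake decision only when the winning posterior
    probability is high enough; otherwise reject the segment. Rejecting
    hard segments trades coverage for accuracy (cf. 85%-87% correct at
    30% rejection in the paper)."""
    winner = probs.argmax(axis=1)
    accepted = probs.max(axis=1) >= threshold
    return winner, accepted

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.20, 0.80]])
labels, keep = classify_with_rejection(probs)
print(labels[keep], "coverage:", keep.mean())
```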

  7. Topos quantum theory on quantization-induced sheaves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji, E-mail: nakayama@law.ryukoku.ac.jp

    2014-10-15

    In this paper, we construct a sheaf-based topos quantum theory. It is well known that a topos quantum theory can be constructed on the topos of presheaves on the category of commutative von Neumann algebras of bounded operators on a Hilbert space. Also, it is already known that quantization naturally induces a Lawvere-Tierney topology on the presheaf topos. We show that a topos quantum theory akin to the presheaf-based one can be constructed on sheaves defined by the quantization-induced Lawvere-Tierney topology. That is, starting from the spectral sheaf as a state space of a given quantum system, we construct sheaf-based expressions of physical propositions and truth objects, and thereby give a method of truth-value assignment to the propositions. Furthermore, we clarify the relationship to the presheaf-based quantum theory. We give translation rules between the sheaf-based ingredients and the corresponding presheaf-based ones. The translation rules have “coarse-graining” effects on the spaces of the presheaf-based ingredients; a lot of different proposition presheaves, truth presheaves, and presheaf-based truth-values are translated to a proposition sheaf, a truth sheaf, and a sheaf-based truth-value, respectively. We examine the extent of the coarse-graining made by translation.

  8. Validation of a quantized-current source with 0.2 ppm uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Friederike; Fricke, Lukas, E-mail: lukas.fricke@ptb.de; Scherer, Hansjörg

    2015-09-07

    We report on high-accuracy measurements of quantized current, sourced by a tunable-barrier single-electron pump at frequencies f up to 1 GHz. The measurements were performed with an ultrastable picoammeter instrument, traceable to the Josephson and quantum Hall effects. Current quantization according to I = ef with e being the elementary charge was confirmed at f = 545 MHz with a total relative uncertainty of 0.2 ppm, improving the state of the art by about a factor of 5. The accuracy of a possible future quantum current standard based on single-electron transport was experimentally validated to be better than the best (indirect) realization of the ampere within the present SI.
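
    As a quick consistency check of the quoted operating point: at f = 545 MHz the quantized current is I = e·f = 1.602176634 × 10⁻¹⁹ C × 545 × 10⁶ s⁻¹ ≈ 87.3 pA, so the stated 0.2 ppm relative uncertainty corresponds to an absolute current uncertainty of roughly 1.7 × 10⁻¹⁷ A.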

  9. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We consider two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the resulting bit strings. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by considering the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
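
    The threshold optimization described above can be reproduced numerically. The sketch below Monte-Carlo-estimates the mutual information between Alice's and Bob's 2-bit labels over an AWGN channel at -3 dB SNR and sweeps the magnitude threshold; the channel model and threshold grid are assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_info_2bit(snr_db, t, n=500_000):
    """MI (bits) between Alice's and Bob's 2-bit labels. Thresholds
    {-t, 0, +t} split each Gaussian sample into four cells
    (MSB = sign, LSB = magnitude)."""
    x = rng.standard_normal(n)
    sigma = 10 ** (-snr_db / 20)          # SNR = 1/sigma^2
    y = x + sigma * rng.standard_normal(n)
    edges = np.array([-t, 0.0, t])
    a, b = np.digitize(x, edges), np.digitize(y, edges)
    joint = np.histogram2d(a, b, bins=4,
                           range=[[-0.5, 3.5], [-0.5, 3.5]])[0] / n
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum()

# Sweep the magnitude threshold t at -3 dB SNR to locate the best choice.
for t in (0.4, 0.6, 0.8, 1.0):
    print(t, round(mutual_info_2bit(-3.0, t), 4))
```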

  10. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
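
    The uniform-white-sequence model for quantizer error has a standard headline prediction: for step size Δ, the error variance is Δ²/12, which is what underlies an "effective SNR" replacing explicit quantization in the analysis. A minimal numerical check (not the paper's derivation):

```python
import numpy as np

# For a quantizer with step `delta` driven by a busy, full-scale input, the
# white-noise model predicts an error variance of delta**2 / 12.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 1_000_000)
for bits in (4, 8, 12):
    delta = 2.0 / 2**bits
    e = np.round(x / delta) * delta - x   # quantization error sequence
    print(bits, "bits: measured", e.var(), "predicted", delta**2 / 12)
```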

  11. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with CT:CG = 1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  12. Perspectives of Light-Front Quantized Field Theory: Some New Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, Prem P.

    1999-08-13

    A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that the LF quantization is equally appropriate as the conventional one and that they lead, assuming the microcausality principle, to the same physical content. This is confirmed in the studies on the LF of the spontaneous symmetry breaking (SSB), of the degenerate vacua in Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of the QCD in covariant gauges among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x^± = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.

  13. Quantization and Quantum-Like Phenomena: A Number Amplitude Approach

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Haven, E.

    2015-12-01

    Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include, the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.

  14. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency-division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP and conventional MMP estimators.
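
    The greedy-pursuit core of such estimators can be sketched with orthogonal matching pursuit (OMP), a simpler relative of MMP (MMP keeps multiple candidate supports per iteration; OMP keeps one). The dictionary, dimensions, and sparsity level below are illustrative assumptions.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily select the k columns of the
    pilot/steering matrix Phi that best explain the received pilots y,
    then least-squares fit their gains."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        sub = Phi[:, support]
        gains, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ gains
    h = np.zeros(Phi.shape[1], complex)
    h[support] = gains
    return h

# Sparse channel: 3 active paths out of 128 candidate directions, observed
# through 32 pilot measurements.
rng = np.random.default_rng(3)
Phi = (rng.standard_normal((32, 128)) + 1j * rng.standard_normal((32, 128))) / 8
h_true = np.zeros(128, complex)
h_true[[5, 40, 99]] = rng.standard_normal(3)
y = Phi @ h_true
print(np.linalg.norm(omp(Phi, y, 3) - h_true))  # ~0: support recovered
```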

  15. Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds

    NASA Astrophysics Data System (ADS)

    Schlichenmaier, Martin

    2018-04-01

    For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product) are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star product of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization are treated. It turns out that all three are equivalent, but different.

  16. Consciousness of Unification: The Mind-Matter Phallacy Bites the Dust

    NASA Astrophysics Data System (ADS)

    Beichler, James E.

    A complete theoretical model of how consciousness arises in neural nets can be developed based on a mixed quantum/classical basis. Both mind and consciousness are multi-leveled scalar and vector electromagnetic complexity patterns, respectively, which emerge within all living organisms through the process of evolution. Like life, the mind and consciousness patterns extend throughout living organisms (bodies), but the neural nets and higher level groupings that distinguish higher levels of consciousness only exist in the brain so mind and consciousness have been traditionally associated with the brain alone. A close study of neurons and neural nets in the brain shows that the microtubules within axons are classical bio-magnetic inductors that emit and absorb electromagnetic pulses from each other. These pulses establish interference patterns that influence the quantized vector potential patterns of interstitial water molecules within the neurons as well as create the coherence within neurons and neural nets that scientists normally associate with more complex memories, thought processes and streams of thought. Memory storage and recall are guided by the microtubules and the actual memory patterns are stored as magnetic vector potential complexity patterns in the points of space at the quantum level occupied by the water molecules. This model also accounts for the plasticity of the brain and implies that mind and consciousness, like life itself, are the result of evolutionary processes. However, consciousness can evolve independent of an organism's birth genetics once it has evolved by normal bottom-up genetic processes and thus force a new type of top-down evolution on living organisms and species as a whole that can be explained by expanding the laws of thermodynamics to include orderly systems.

  17. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a noisy memoryless channel; the optimization encompasses the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were rendered.

  18. Theory of free electron vortices

    PubMed Central

    Schattschneider, P.; Verbeeck, J.

    2011-01-01

    The recent creation of electron vortex beams and their first practical application motivate a better understanding of their properties. Here, we develop the theory of free electron vortices with quantized angular momentum, based on solutions of the Schrödinger equation for cylindrical boundary conditions. The principle of transformation of a plane wave into vortices with quantized angular momentum, their paraxial propagation through round magnetic lenses, and the effect of partial coherence are discussed. PMID:21930017

  19. Holographic anyonic superfluidity

    NASA Astrophysics Data System (ADS)

    Jokela, Niko; Lifschytz, Gilad; Lippert, Matthew

    2013-10-01

    Starting with a holographic construction for a fractional quantum Hall state based on the D3-D7' system, we explore alternative quantization conditions for the bulk gauge fields. This gives a description of a quantum Hall state with various filling fractions. For a particular alternative quantization of the bulk gauge fields, we obtain a holographic anyon fluid in a vanishing background magnetic field. We show that this system is a superfluid, exhibiting the relevant gapless excitation.

  20. Dielectric properties of classical and quantized ionic fluids.

    PubMed

    Høye, Johan S

    2010-06-01

    We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanical systems. With this approach the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here, we will focus upon the presence of a time-dependent electromagnetic pair interaction where the electromagnetic vector potential that depends upon currents, will be present. Thus both density and current correlations are needed to evaluate the influence of this interaction. Then we utilize that densities and currents can be expressed by polarizations by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. This nonlocality has as a consequence that we find no contribution from a possible transverse electric zero-frequency mode for the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations for the energies of the quantized electrons of molecules where now retardation effects also are taken into account.

  1. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
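
    For the ordered-dither variant mentioned at the end of the abstract, the construction can be demonstrated directly: dithering one channel with the inverted threshold matrix makes its quantization error exactly anti-correlated with the others on flat regions, shifting error energy out of luminance. The 4x4 Bayer matrix and mid-gray test patch below are standard illustrative choices.

```python
import numpy as np

T = np.array([[ 0,  8,  2, 10],
              [12,  4, 14,  6],
              [ 3, 11,  1,  9],
              [15,  7, 13,  5]])          # 4x4 Bayer threshold matrix

def dither(channel, thresholds):          # channel values in [0, 1]
    tiled = np.tile(thresholds, (channel.shape[0] // 4, channel.shape[1] // 4))
    return (channel > (tiled + 0.5) / 16).astype(float)

img = np.full((16, 16), 0.5)              # flat mid-gray test patch
r = dither(img, T)                        # normal thresholds
b = dither(img, 15 - T)                   # inverted thresholds for one channel
print(np.corrcoef((r - img).ravel(), (b - img).ravel())[0, 1])  # -1.0
```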

  2. A neural net based architecture for the segmentation of mixed gray-level and binary pictures

    NASA Technical Reports Server (NTRS)

    Tabatabai, Ali; Troudet, Terry P.

    1991-01-01

    A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16 x 16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16 x 16 neural net. For compression purposes, each image block is further divided into 4 x 4 subblocks; a one-bit nonparametric quantizer is used to encode 16 x 16 character and 4 x 4 image blocks; and the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.

  3. First integrals of motion in a gauge covariant framework, Killing-Maxwell system and quantum anomalies

    NASA Astrophysics Data System (ADS)

    Visinescu, M.

    2012-10-01

    Hidden symmetries in a covariant Hamiltonian framework are investigated. The special role of the Stäckel-Killing and Killing-Yano tensors is pointed out. The covariant phase-space is extended to include external gauge fields and scalar potentials. We investigate the possibility for a higher-order symmetry to survive when the electromagnetic interactions are taken into account. A concrete realization of this possibility is given by the Killing-Maxwell system. The classical conserved quantities do not generally transfer to the quantized systems, producing quantum gravitational anomalies. As a rule the conformal extension of the Killing vectors and tensors does not produce symmetry operators for the Klein-Gordon operator.

  4. Asymptotics of the evolution semigroup associated with a scalar field in the presence of a non-linear electromagnetic field

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Tamura, Hiroshi

    2018-04-01

    We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).

  5. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  6. Coherent distributions for the rigid rotator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigorescu, Marius

    2016-06-15

    Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite “zero point” energy term. It is shown that during the time when a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.

  7. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  8. Quantum theory of structured monochromatic light

    NASA Astrophysics Data System (ADS)

    Punnoose, Alexander; Tu, J. J.

    2017-08-01

    Applications that envisage utilizing the orbital angular momentum (OAM) at the single photon level assume that the OAM degrees of freedom of the photons are orthogonal. To test this critical assumption, we quantize the beam-like solutions of the vector Helmholtz equation from first principles. We show that although the photon operators of a diffracting monochromatic beam do not in general satisfy the canonical commutation relations, implying that the photon states in Fock space are not orthogonal, the states are bona fide eigenstates of the number and Hamiltonian operators. As a result, the representation for the photon operators presented in this work form a natural basis to study structured monochromatic light at the single photon level.

  9. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By applying the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current, which serves as an efficient tool for treating the current quantization effects.

  10. Velocity-tunable slow beams of cold O2 in a single spin-rovibronic state with full angular-momentum orientation by multistage Zeeman deceleration

    NASA Astrophysics Data System (ADS)

    Wiederkehr, A. W.; Schmutz, H.; Motsch, M.; Merkt, F.

    2012-08-01

    Cold samples of oxygen molecules in supersonic beams have been decelerated from initial velocities of 390 and 450 m s⁻¹ to final velocities in the range between 150 and 280 m s⁻¹ using a 90-stage Zeeman decelerator. (2 + 1) resonance-enhanced-multiphoton-ionization (REMPI) spectra of the 3sσg ³Πg (C) ← X ³Σg⁻ two-photon transition of O2 have been recorded to characterize the state selectivity of the deceleration process. The decelerated molecular sample was found to consist exclusively of molecules in the J″ = 2 spin-rotational component of the X ³Σg⁻ ground state of O2. Measurements of the REMPI spectra using linearly polarized laser radiation with polarization vector parallel to the decelerator axis, and thus to the magnetic-field vector of the deceleration solenoids, further showed that only the M_J″ = 2 magnetic sublevel of the N″ = 1, J″ = 2 spin-rotational level is populated in the decelerated sample, which therefore is characterized by a fully oriented total-angular-momentum vector. By maintaining a weak quantization magnetic field beyond the decelerator, the polarization of the sample could be maintained over the 5 cm distance separating the last deceleration solenoid and the detection region.

  11. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Two generalizations of LVQ that are explicitly designed as clustering algorithms are then presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution - these are taken care of automatically. Segmentation of a gray-tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
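
    The difference between the winner-take-all update and an all-prototype fuzzy update can be sketched in a few lines. The membership formula below is the standard fuzzy c-means one, used here only to illustrate the contrast; the paper derives its FLVQ rules from a specific objective function.

```python
import numpy as np

def lvq_step(protos, x, lr):
    """Classical (winner-take-all) LVQ/SHCM update: only the prototype
    nearest to the input vector x moves."""
    w = np.argmin(((protos - x) ** 2).sum(axis=1))
    protos[w] += lr * (x - protos[w])
    return protos

def flvq_step(protos, x, lr, m=2.0):
    """FLVQ-style update: every prototype moves, weighted by its fuzzy
    membership, so the result depends far less on initialization."""
    d = ((protos - x) ** 2).sum(axis=1) + 1e-12
    u = 1.0 / ((d[:, None] / d[None, :]) ** (1 / (m - 1))).sum(axis=1)
    protos += lr * (u ** m)[:, None] * (x - protos)
    return protos
```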

  12. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.

  13. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC) encoders, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the transform block sizes of order 4, 8, 16, and 32 and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For the rate and distortion estimation, two orthogonal matrices of order 4 and 8, newly designed in a butterfly structure using only addition and shift operations, are applied on top of the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed that uses a pseudo-entropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
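
    The multiplier-free butterfly at the heart of such schemes is the Walsh-Hadamard transform itself. The sketch below shows a 4-point WHT built from two add/subtract stages, applied separably to a 2D residual block, with a crude texture-rate proxy (the count of non-zero quantized coefficients mentioned above); it illustrates the idea only and is not the paper's estimator or its newly designed orthogonal matrices.

```python
import numpy as np

def wht4(x):
    """4-point Walsh-Hadamard transform as a two-stage butterfly using
    only additions and subtractions (shifts supply any scaling in
    fixed-point hardware)."""
    a0, a1 = x[0] + x[2], x[1] + x[3]
    a2, a3 = x[0] - x[2], x[1] - x[3]
    return np.array([a0 + a1, a0 - a1, a2 + a3, a2 - a3])

def wht4_2d(block):
    rows = np.array([wht4(r) for r in block])
    return np.array([wht4(c) for c in rows.T]).T

# Crude CU-level texture-rate proxy: non-zero coefficients surviving
# quantization with step q.
residual = np.arange(16).reshape(4, 4) - 7.5
coeffs = wht4_2d(residual)
q = 16
print(int(np.count_nonzero(np.round(coeffs / q))))
```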

  14. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae make it possible to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also make it possible to calculate these phase space Feynman path integrals and to connect them with functional integrals with respect to probability measures.
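
    For reference, the Lévy-Khintchine formula mentioned above has the standard textbook form (stated here up to sign and truncation conventions; this is general background, not a quotation from the article):

      \psi(\xi) \;=\; i\, b\cdot\xi \;-\; \tfrac{1}{2}\,\xi\cdot A\,\xi
      \;+\; \int_{\mathbb{R}^n\setminus\{0\}} \Big( e^{i\,\xi\cdot y} - 1 - i\,\xi\cdot y\,\mathbf{1}_{\{|y|\le 1\}}(y) \Big)\, \nu(\mathrm{d}y),

    where b is a drift vector, A is a positive semidefinite matrix, and ν is a Lévy measure; symbols of this form generate the evolution semigroups considered in the record.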

  15. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.

  16. Characteristic extraction and matching algorithms of ballistic missile in near-space by hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi

    2014-10-01

    The ballistic missile hyperspectral data of an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The transverse-counting algorithm has lower complexity and can be implemented more easily than the quantization-coding algorithm. The transverse-counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.

  17. New fast DCT algorithms based on Loeffler's factorization

    NASA Astrophysics Data System (ADS)

    Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon

    2012-10-01

    This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of the 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3-4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.

  18. Simultaneous Conduction and Valence Band Quantization in Ultrashallow High-Density Doping Profiles in Semiconductors

    NASA Astrophysics Data System (ADS)

    Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.

    2018-01-01

    We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
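
    The particle-in-a-box estimate invoked for the VB states is easy to reproduce numerically. In the sketch below the infinite-well formula E_n = n²π²ħ²/(2mL²) is evaluated for two well widths; the heavy-hole effective mass value is a commonly used Si number taken as an assumption, not a parameter from the paper.

      import numpy as np
      from scipy.constants import hbar, m_e, electron_volt

      def box_levels_eV(width_nm, m_eff=0.49, nmax=4):
          """Infinite-well energies E_n = n^2 pi^2 hbar^2 / (2 m L^2), in eV."""
          L = width_nm * 1e-9
          n = np.arange(1, nmax + 1)
          E = (n * np.pi * hbar) ** 2 / (2 * m_eff * m_e * L ** 2)
          return E / electron_volt

      print(box_levels_eV(2.0))   # shallow dopant layer: narrow well, wide level spacing
      print(box_levels_eV(4.0))   # deeper layer: wider well, more closely spaced levels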

  19. From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''

    NASA Astrophysics Data System (ADS)

    Bergeron, H.

    2001-09-01

    Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L²-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper, which specifies the Hilbert space as L²(ℝⁿ), the Heisenberg rule [p_i, q_j] = −iℏδ_ij with p = −iℏ∇, the free Hamiltonian H = −ℏ²Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related to the quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle). Moreover, spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].

  20. Mathematic Model of Digital Control System with PID Regulator and Regular Step of Quantization with Information Transfer via the Channel of Plural Access

    NASA Astrophysics Data System (ADS)

    Abramov, G. V.; Emeljanov, A. E.; Ivashin, A. L.

    Theoretical bases for modeling a digital control system with information transfer via a channel of plural access and a regular quantization cycle are presented. The theory of dynamic systems with random changes of structure, including elements of the theory of Markov random processes, is used for the mathematical description of a network control system. The characteristics of such control systems are obtained. Experimental research on the given control systems is carried out.
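
    For orientation, the regulator itself is just a discrete PID law executed at a regular quantization (sampling) step, with the measurement additionally quantized by the shared digital channel. The toy loop below (Python; the plant, gains, and quantizer step are illustrative assumptions) shows that structure; the paper's Markov random-structure analysis of the channel is not reproduced here.

      def pid_step(err, state, kp, ki, kd, dt):
          """One step of a discrete PID regulator with regular step dt."""
          integ, prev_err = state
          integ += err * dt
          deriv = (err - prev_err) / dt
          return kp * err + ki * integ + kd * deriv, (integ, err)

      def quantize(v, step=0.05):
          """Measurement quantization imposed by the digital channel."""
          return round(v / step) * step

      x, state, dt, setpoint = 0.0, (0.0, 0.0), 0.01, 1.0
      for _ in range(1000):
          err = setpoint - quantize(x)        # controller sees quantized output
          u, state = pid_step(err, state, kp=2.0, ki=1.0, kd=0.05, dt=dt)
          x += dt * (-x + u)                  # Euler step of a toy plant dx/dt = -x + u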

  1. Second quantization in bit-string physics

    NASA Technical Reports Server (NTRS)

    Noyes, H. Pierre

    1993-01-01

    Using a new fundamental theory based on bit-strings, we derive a finite and discrete version of the solutions of the free one-particle Dirac equation as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. We sketch how these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences.

  2. Effect of signal intensity and camera quantization on laser speckle contrast analysis

    PubMed Central

    Song, Lipei; Elson, Daniel S.

    2012-01-01

    Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion, as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values, due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate for the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
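
    The quantity being corrected is the local speckle contrast K = σ/⟨I⟩ computed on camera-quantized data. The sketch below (Python, SciPy) computes K on a synthetic exponential-intensity speckle pattern after 8-bit quantization; the paper's rational-function bias correction itself is not reproduced here.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_contrast(img, win=7):
          """Speckle contrast K = std/mean over a sliding window."""
          m = uniform_filter(img, win)
          m2 = uniform_filter(img * img, win)
          var = np.maximum(m2 - m * m, 0.0)
          return np.sqrt(var) / np.maximum(m, 1e-12)

      speckle = np.random.exponential(scale=20.0, size=(256, 256))  # ideal speckle PDF
      quantized = np.clip(np.round(speckle), 0, 255)                # 8-bit ADC
      K = local_contrast(quantized)   # biased when few gray levels span the PDF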

  3. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, these controllers not only reduce control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
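
    For reference, the logarithmic quantizer commonly used in such quantized-control designs maps a signal onto geometrically spaced levels u_i = ρ^i·u0 and satisfies the sector bound |q(v) − v| ≤ δ|v| with δ = (1 − ρ)/(1 + ρ). A minimal sketch (parameter values are arbitrary, not the paper's):

      import math

      def log_quantize(v, rho=0.5, u0=1.0):
          """Logarithmic quantizer with levels +/- rho**i * u0, i integer.
          Picks the level u_i with u_i/(1+delta) < |v| <= u_i/(1-delta)."""
          if v == 0.0:
              return 0.0
          delta = (1.0 - rho) / (1.0 + rho)
          a = abs(v)
          i = math.floor(math.log(a * (1.0 - delta) / u0) / math.log(rho))
          return math.copysign(rho ** i * u0, v)

    Coarser levels (smaller ρ) mean fewer distinct values to transmit, which is how such controllers save channel bandwidth at the price of a larger sector-bounded quantization error.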

  4. Quantized Faraday and Kerr rotation and axion electrodynamics of a 3D topological insulator

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Salehi, M.; Koirala, N.; Moon, J.; Oh, S.; Armitage, N. P.

    2016-12-01

    Topological insulators have been proposed to be best characterized as bulk magnetoelectric materials that show response functions quantized in terms of fundamental physical constants. Here, we lower the chemical potential of three-dimensional (3D) Bi2Se3 films to ~30 meV above the Dirac point and probe their low-energy electrodynamic response in the presence of magnetic fields with high-precision time-domain terahertz polarimetry. For fields higher than 5 tesla, we observed quantized Faraday and Kerr rotations, whereas the dc transport is still semiclassical. A nontrivial Berry’s phase offset to these values gives evidence for axion electrodynamics and the topological magnetoelectric effect. The time structure used in these measurements allows a direct measure of the fine-structure constant based on a topological invariant of a solid-state system.

  5. Third quantization

    NASA Astrophysics Data System (ADS)

    Seligman, Thomas H.; Prosen, Tomaž

    2010-12-01

    The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. Here we concentrate on the bosonic case, where the unboundedness of the operators requires the definition of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.

  6. Third quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seligman, Thomas H.; Centro Internacional de Ciencias, Cuernavaca, Morelos; Prosen, Tomaz

    2010-12-23

    The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. Here we concentrate on the bosonic case, where the unboundedness of the operators requires the definition of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.

  7. Dimensional quantization effects in the thermodynamics of conductive filaments

    NASA Astrophysics Data System (ADS)

    Niraula, D.; Grice, C. R.; Karpov, V. G.

    2018-06-01

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  8. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  9. Dimensional quantization effects in the thermodynamics of conductive filaments.

    PubMed

    Niraula, D; Grice, C R; Karpov, V G

    2018-06-29

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  10. Dynamics of entropy and nonclassical properties of the state of a Λ-type three-level atom interacting with a single-mode cavity field with intensity-dependent coupling in a Kerr medium

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.

    2012-02-01

    In this paper, we study the interaction between a three-level atom and a quantized single-mode field with ‘intensity-dependent coupling’ in a ‘Kerr medium’. The three-level atom is considered to be in a Λ-type configuration. Under particular initial conditions, which may be prepared for the atom and the field, the dynamical state vector of the entire system is explicitly obtained for an arbitrary nonlinearity function f(n) associated with any physical system. Then, after evaluating the variation of the field entropy against time, we investigate the quantum statistics as well as some of the nonclassical properties of the introduced state. During our calculations we investigate the effects of the intensity-dependent coupling, the Kerr medium, and the detuning parameters on the depth and domain of the nonclassicality features of the atom-field state vector. Finally, we compare our results with those of V-type three-level atoms.

  11. Face antispoofing based on frame difference and multilevel representation

    NASA Astrophysics Data System (ADS)

    Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad

    2017-07-01

    Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made by fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or a replay attack) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks at several levels. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms the other state-of-the-art methods on different media and quality metrics.
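
    A compressed sketch of the pipeline shape follows (Python, with scikit-image and scikit-learn assumed). Only the LBP descriptor of the three is used, the grid stands in for the multilevel block structure, and all parameters are placeholders rather than the paper's settings.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVC

      def frame_diff_features(prev, curr, grid=4, P=8, R=1.0):
          """Blockwise uniform-LBP histograms of the frame difference."""
          diff = np.abs(curr.astype(float) - prev.astype(float))
          lbp = local_binary_pattern(diff, P, R, method='uniform')
          h, w = lbp.shape
          feats = []
          for by in range(grid):
              for bx in range(grid):
                  block = lbp[by*h//grid:(by+1)*h//grid,
                              bx*w//grid:(bx+1)*w//grid]
                  hist, _ = np.histogram(block, bins=P + 2,
                                         range=(0, P + 2), density=True)
                  feats.append(hist)
          return np.concatenate(feats)

      # X: features stacked from labeled clips; y: 0 = fake, 1 = real.
      # clf = SVC(kernel='rbf').fit(X, y)   # Fisher-score selection would precede this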

  12. Quantized Self-Assembly of Discotic Rings in a Liquid Crystal Confined in Nanopores

    NASA Astrophysics Data System (ADS)

    Sentker, Kathrin; Zantop, Arne W.; Lippmann, Milena; Hofmann, Tommy; Seeck, Oliver H.; Kityk, Andriy V.; Yildirim, Arda; Schönhals, Andreas; Mazza, Marco G.; Huber, Patrick

    2018-02-01

    Disklike molecules with aromatic cores spontaneously stack up in linear columns with high, one-dimensional charge carrier mobilities along the columnar axes, making them prominent model systems for functional, self-organized matter. We show by high-resolution optical birefringence and synchrotron-based x-ray diffraction that confining a thermotropic discotic liquid crystal in cylindrical nanopores induces a quantized formation of annular layers consisting of concentric circular bent columns, unknown in the bulk state. Starting from the walls this ring self-assembly propagates layer by layer towards the pore center in the supercooled domain of the bulk isotropic-columnar transition and thus allows one to switch on and off reversibly single, nanosized rings through small temperature variations. By establishing a Gibbs free energy phase diagram we trace the phase transition quantization to the discreteness of the layers' excess bend deformation energies in comparison to the thermal energy, even for this near room-temperature system. Monte Carlo simulations yielding spatially resolved nematic order parameters, density maps, and bond-orientational order parameters corroborate the universality and robustness of the confinement-induced columnar ring formation as well as its quantized nature.

  13. Topological quantization in units of the fine structure constant.

    PubMed

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng

    2010-10-15

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.

  14. Successive approximation-like 4-bit full-optical analog-to-digital converter based on Kerr-like nonlinear photonic crystal ring resonators

    NASA Astrophysics Data System (ADS)

    Tavousi, Alireza; Mansouri-Birjandi, Mohammad Ali; Saffari, Mehdi

    2016-09-01

    Implementing photonic sampling and quantizing analog-to-digital converters (ADCs) enables us to extract a single binary word from optical signals without the need for extra assisting electronics. This would greatly increase the sampling and quantization speed while decreasing the consumed power. To this end, based on the concept of the successive approximation method, a 4-bit full-optical ADC that operates using the intensity-dependent Kerr-like nonlinearity in a two-dimensional photonic crystal (2DPhC) platform is proposed. Silicon (Si) nanocrystal is chosen because of its suitable nonlinear material characteristics. An optical limiter is used for the clamping and quantization of each of the successive levels that represent the ADC bits. In the proposal, an energy-efficient optical ADC circuit is implemented by controlling system parameters such as the ring-to-waveguide coupling coefficients, the ring's nonlinear refractive index, and the ring's length. The performance of the ADC structure is verified by simulation using the finite-difference time-domain (FDTD) method.
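
    The electronic recipe the optical design mimics is the usual successive-approximation loop: test one bit per stage, most significant first, subtracting half the remaining range whenever the residual exceeds it; the cascaded optical limiters play the role of the comparator stages. A plain-Python sketch of that recipe (not of the photonic circuit itself):

      def sar_quantize(v, vref=1.0, bits=4):
          """Successive approximation ADC: one bit decided per stage."""
          code, residual, step = 0, v, vref / 2.0
          for _ in range(bits):
              code <<= 1
              if residual >= step:
                  residual -= step
                  code |= 1
              step /= 2.0
          return code

      print([sar_quantize(x / 16.0) for x in range(16)])   # 0..15 on a ramp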

  15. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme based on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map, which are then compressed. Every block selects its coding mode from the two new modes and the previous intra modes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
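
    The BCIM idea, viewed as adaptive color quantization, can be sketched with an off-the-shelf clustering step: pick a few base colors per block and store only the palette plus a per-pixel index map. K-means is used here as a stand-in for whatever base-color selection the codec actually performs, and the entropy coding and RDO mode decision are omitted.

      import numpy as np
      from sklearn.cluster import KMeans

      def bcim_encode(block, n_colors=4):
          """Represent an RGB block by base colors plus an index map."""
          h, w, _ = block.shape
          pixels = block.reshape(-1, 3).astype(float)
          km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
          base_colors = km.cluster_centers_.round().astype(np.uint8)
          index_map = km.labels_.reshape(h, w).astype(np.uint8)
          return base_colors, index_map    # both would then be entropy coded

      def bcim_decode(base_colors, index_map):
          return base_colors[index_map]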

  16. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  17. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image enhancement attacks than the traditional QIM algorithm.
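
    Basic (unimproved) QIM on a single real coefficient works by quantizing onto one of two interleaved lattices, one per bit value; extraction is blind because the decoder only checks which lattice the received coefficient is nearer to. The following sketch shows that baseline; the paper's distortion-compensated variant and the quaternion-domain embedding are not reproduced.

      import numpy as np

      def qim_embed(c, bit, delta=8.0):
          """Quantize c onto the lattice delta*Z + delta/4 (bit 1)
          or delta*Z - delta/4 (bit 0)."""
          offset = 0.25 * delta if bit else -0.25 * delta
          return delta * np.round((c - offset) / delta) + offset

      def qim_extract(c, delta=8.0):
          """Blind extraction: nearest-lattice decision."""
          d0 = abs(c - qim_embed(c, 0, delta))
          d1 = abs(c - qim_embed(c, 1, delta))
          return int(d1 < d0)

      assert qim_extract(qim_embed(37.3, 1)) == 1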

  18. Feature weighting using particle swarm optimization for learning vector quantization classifier

    NASA Astrophysics Data System (ADS)

    Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias

    2018-03-01

    This paper discusses and proposes a method of feature weighting for classification tasks on the competitive-learning artificial neural network LVQ. The feature-weighting method searches for attribute weights using PSO so that each attribute's effect on the resulting output is tuned. This method is then applied to the LVQ classifier and tested on three datasets obtained from the UCI Machine Learning repository. An accuracy analysis is then generated for two approaches: the first approach uses LVQ1, referred to as LVQ-Classifier, and the second, referred to as PSOFW-LVQ, is the proposed model. The results show that the PSO algorithm is capable of finding attribute weights that increase the accuracy of the LVQ classifier.
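
    The PSO part reduces to the canonical velocity/position update applied to candidate weight vectors, with classifier accuracy as the fitness. A minimal sketch (hyperparameters are typical defaults, not the paper's):

      import numpy as np

      rng = np.random.default_rng(0)

      def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
          """Canonical PSO update; each particle is one weight vector."""
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0.0, 1.0)   # keep feature weights in [0, 1]
          return pos, vel

      # fitness(weights): train/evaluate the LVQ classifier on X * weights and
      # return accuracy; pbest/gbest track the best weight vectors found so far.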

  19. Three paths toward the quantum angle operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl

    2016-12-15

    We examine mathematical questions around the angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of the quantum angle. The first one is based on operator theory and parallels the definition of the angle for the upper half-circle through its cosine, completed by a sign inversion. The two other methods are integral quantizations generalizing in a certain sense the Berezin-Klauder approaches. One method pertains to Weyl-Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.

  20. A field theoretic generalization of Hajicek and Kuchar's quantization scheme in 3+1 canonical quantum gravity

    NASA Astrophysics Data System (ADS)

    Melas, Evangelos

    2011-07-01

    The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-deWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.

  1. Tomlinson-Harashima Precoding for Multiuser MIMO Systems With Quantized CSI Feedback and User Scheduling

    NASA Astrophysics Data System (ADS)

    Sun, Liang; McKay, Matthew R.

    2014-08-01

    This paper studies the sum rate performance of a low-complexity quantized CSI-based Tomlinson-Harashima (TH) precoding scheme for downlink multiuser MIMO transmission employing greedy user selection. The asymptotic distribution of the output signal-to-interference-plus-noise ratio of each selected user and the asymptotic sum rate as the number of users K grows large are derived by using extreme value theory. For fixed finite signal-to-noise ratios and a finite number of transmit antennas n_T, we prove that as K grows large, the proposed approach can achieve the optimal sum rate scaling of the MIMO broadcast channel. We also prove that, if we ignore the precoding loss, the average sum rate of this approach converges to the average sum capacity of the MIMO broadcast channel. Our results provide insights into the effect of multiuser interference caused by quantized CSI on the multiuser diversity gain.

  2. Correlated Light-Matter Interactions in Cavity QED

    NASA Astrophysics Data System (ADS)

    Flick, Johannes; Pellegrini, Camilla; Ruggenthaler, Michael; Appel, Heiko; Tokatly, Ilya; Rubio, Angel

    2015-03-01

    In the last decade, time-dependent density functional theory (TDDFT) has been successfully applied to a large variety of problems, such as calculations of absorption spectra, excitation energies, or dynamics in strong laser fields. Recently, we have generalized TDDFT to also describe electron-photon systems (QED-TDDFT). Here, matter and light are treated on an equal quantized footing. In this work, we present the first numerical calculations in the framework of QED-TDDFT. We show exact solutions for fully quantized prototype systems consisting of atoms or molecules placed in optical high-Q cavities and coupled to quantized electromagnetic modes. We focus on the electron-photon exchange-correlation (xc) contribution by calculating exact Kohn-Sham potentials using fixed-point inversions and present the performance of the first approximated xc-potential based on an optimized effective potential (OEP) approach.

  3. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
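
    The overall shape of such a codec is a DWT subband decomposition followed by uniform scalar quantization of each subband. The sketch below uses PyWavelets with a generic 9/7-type biorthogonal wavelet and a single global step size; the WSQ specification's actual filter bank, per-subband bin widths, and Huffman coding are not reproduced.

      import numpy as np
      import pywt

      def wsq_like_roundtrip(img, wavelet='bior4.4', levels=5, qstep=8.0):
          """DWT + uniform scalar quantization per subband, then reconstruct."""
          coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
          quant = [np.round(coeffs[0] / qstep)]
          quant += [tuple(np.round(d / qstep) for d in det) for det in coeffs[1:]]
          # entropy coding of 'quant' would go here; dequantize and invert:
          dequant = [quant[0] * qstep]
          dequant += [tuple(d * qstep for d in det) for det in quant[1:]]
          return pywt.waverec2(dequant, wavelet)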

  4. Quantum Computing and Second Quantization

    DOE PAGES

    Makaruk, Hanna Ewa

    2017-02-10

    Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture presents the general idea of second quantization and briefly discusses some of its most important formulations.

  5. Quantum Computing and Second Quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makaruk, Hanna Ewa

    Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture presents the general idea of second quantization and briefly discusses some of its most important formulations.

  6. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix design incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.

  7. Pseudo-Kähler Quantization on Flag Manifolds

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.

  8. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  9. Observation of Landau quantization and standing waves in HfSiS

    NASA Astrophysics Data System (ADS)

    Jiao, L.; Xu, Q. N.; Qi, Y. P.; Wu, S.-C.; Sun, Y.; Felser, C.; Wirth, S.

    2018-05-01

    Recently, HfSiS was found to be a new type of Dirac semimetal with a line of Dirac nodes in the band structure. Meanwhile, Rashba-split surface states are also pronounced in this compound. Here we report a systematic study of HfSiS by scanning tunneling microscopy/spectroscopy at low temperature and high magnetic field. The Rashba-split surface states are characterized by measuring Landau quantization and standing waves, which reveal a quasilinear dispersive band structure. First-principles calculations based on density-functional theory are conducted and compared with the experimental results. Based on these investigations, the properties of the Rashba-split surface states and their interplay with defects and collective modes are discussed.

  10. Two dimensional topological insulator in quantizing magnetic fields

    NASA Astrophysics Data System (ADS)

    Olshanetsky, E. B.; Kvon, Z. D.; Gusev, G. M.; Mikhailov, N. N.; Dvoretsky, S. A.

    2018-05-01

    The effect of quantizing magnetic field on the electron transport is investigated in a two dimensional topological insulator (2D TI) based on a 8 nm (013) HgTe quantum well (QW). The local resistance behavior is indicative of a metal-insulator transition at B ≈ 6 T. On the whole the experimental data agrees with the theory according to which the helical edge states transport in a 2D TI persists from zero up to a critical magnetic field Bc after which a gap opens up in the 2D TI spectrum.

  11. Experimental Studies on a Compact Storage Scheme for Wavelet-based Multiresolution Subregion Retrieval

    NASA Technical Reports Server (NTRS)

    Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.

    1996-01-01

    Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them for image library systems, a compact storage scheme for quantized coefficient wavelet data must be developed with a support for fast subregion retrieval. We have designed such a scheme and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios, while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.

  12. Instant-Form and Light-Front Quantization of Field Theories

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James

    2018-05-01

    In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantization at appropriate places.

  13. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
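
    The discrete-time mechanism can be reproduced in a few lines with a standard form of the micro-chaos map, x_{k+1} = a·x_k − b·Δ·⌊x_k/Δ⌋: the linear part is repelling (|a| > 1), yet the quantized feedback keeps the orbit inside a band whose width scales with the quantization step Δ. Parameter values below are illustrative, not taken from the paper.

      import numpy as np

      def microchaos(a=1.2, b=1.1, delta=0.05, x0=0.01, n=2000):
          """Iterate the quantized-feedback map; the orbit stays bounded."""
          x = np.empty(n)
          x[0] = x0
          for k in range(n - 1):
              x[k + 1] = a * x[k] - b * delta * np.floor(x[k] / delta)
          return x

      orbit = microchaos()
      print(orbit.min(), orbit.max())   # confined to a band of order delta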

  14. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) value of the number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should generally be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  15. Quantization of geometric phase with integer and fractional topological characterization in a quantum Ising chain with long-range interaction.

    PubMed

    Sarkar, Sujit

    2018-04-12

    We study the behavior of the quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterizations for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results on duality and its relation to topological quantization are presented here, as is a symmetry study for this model Hamiltonian. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of a system with long-range interaction. We also present quite a few exact solutions with physical explanations. Finally, we present the relation between duality, symmetry, and topological characterization. Our work provides a new perspective on topological quantization.

  16. Spacetime algebra as a powerful tool for electromagnetism

    NASA Astrophysics Data System (ADS)

    Dressel, Justin; Bliokh, Konstantin Y.; Nori, Franco

    2015-08-01

    We present a comprehensive introduction to spacetime algebra that emphasizes its practicality and power as a tool for the study of electromagnetism. We carefully develop this natural (Clifford) algebra of the Minkowski spacetime geometry, with a particular focus on its intrinsic (and often overlooked) complex structure. Notably, the scalar imaginary that appears throughout the electromagnetic theory properly corresponds to the unit 4-volume of spacetime itself, and thus has physical meaning. The electric and magnetic fields are combined into a single complex and frame-independent bivector field, which generalizes the Riemann-Silberstein complex vector that has recently resurfaced in studies of the single photon wavefunction. The complex structure of spacetime also underpins the emergence of electromagnetic waves, circular polarizations, the normal variables for canonical quantization, the distinction between electric and magnetic charge, complex spinor representations of Lorentz transformations, and the dual (electric-magnetic field exchange) symmetry that produces helicity conservation in vacuum fields. This latter symmetry manifests as an arbitrary global phase of the complex field, motivating the use of a complex vector potential, along with an associated transverse and gauge-invariant bivector potential, as well as complex (bivector and scalar) Hertz potentials. Our detailed treatment aims to encourage the use of spacetime algebra as a readily available and mature extension to existing vector calculus and tensor methods that can greatly simplify the analysis of fundamentally relativistic objects like the electromagnetic field.

  17. Luminescence studies of HgCdTe- and InAsSb-based quantum-well structures

    NASA Astrophysics Data System (ADS)

    Izhnin, I. I.; Izhnin, A. I.; Fitsych, O. I.; Voitsekhovskii, A. V.; Gorn, D. I.; Semakova, A. A.; Bazhenov, N. L.; Mynbaev, K. D.; Zegrya, G. G.

    2018-04-01

    Results of photoluminescence studies of single-quantum-well HgCdTe-based structures and electroluminescence studies of multiple-quantum-well InAsSb-based structures are reported. The HgCdTe structures were grown by molecular beam epitaxy on GaAs substrates. The InAsSb-based structures were grown by metal-organic chemical vapor deposition on InAs substrates. The common feature of the luminescence spectra of all the structures was the presence of peaks with energies much larger than those of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The possibility of observing optical transitions between the quantization levels of electrons and the first and/or second heavy- and light-hole levels is discussed in relation to the specifics of the electronic structure of the materials under consideration.

  18. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    PubMed Central

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274

  19. Low-bit rate feedback strategies for iterative IA-precoded MIMO-OFDM-based systems.

    PubMed

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge.

  20. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  1. Quantized discrete space oscillators

    NASA Technical Reports Server (NTRS)

    Uzes, C. A.; Kapuscik, Edward

    1993-01-01

    A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.

  2. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  3. Can chaos be observed in quantum gravity?

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Höhn, Philipp A.; Koslowski, Tim A.; Nelson, Mike I.

    2017-06-01

    Full general relativity is almost certainly 'chaotic'. We argue that this entails a notion of non-integrability: a generic general relativistic model, at least when coupled to cosmologically interesting matter, likely possesses neither differentiable Dirac observables nor a reduced phase space. It follows that the standard notion of observable has to be extended to include non-differentiable or even discontinuous generalized observables. These cannot carry Poisson-algebraic structures and do not admit a standard quantization; one thus faces a quantum representation problem of gravitational observables. This has deep consequences for a quantum theory of gravity, which we investigate in a simple model for a system with Hamiltonian constraint that fails to be completely integrable. We show that basing the quantization on standard topology precludes a semiclassical limit and can even prohibit any solutions to the quantum constraints. Our proposed solution to this problem is to refine topology such that a complete set of Dirac observables becomes continuous. In the toy model, it turns out that a refinement to a polymer-type topology, as e.g. used in loop gravity, is sufficient. Basing quantization of the toy model on this finer topology, we find a complete set of quantum Dirac observables and a suitable semiclassical limit. This strategy is applicable to realistic candidate theories of quantum gravity and thereby suggests a solution to a long-standing problem which implies ramifications for the very concept of quantization. Our work reveals a qualitatively novel facet of chaos in physics and opens up a new avenue of research on chaos in gravity which hints at deep insights into the structure of quantum gravity.

  4. Thermal field theory and generalized light front quantization

    NASA Astrophysics Data System (ADS)

    Weldon, H. Arthur

    2003-04-01

    The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡ_{μν} is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount −i/(T·ḡ^{00}). Light-front quantization fails since ḡ^{00} = 0; however, various related quantizations are possible.

  5. Generalized radiation-field quantization method and the Petermann excess-noise factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305

    2003-10-01

    We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.

  6. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is characterized. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  7. BFV approach to geometric quantization

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1994-12-01

    A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space, with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.

  8. Deformation quantization of fermi fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.

    2008-04-15

    Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to fermionic systems with an infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates the procedure considerably. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally the Dirac propagator have been found with the use of these variables.

  9. Polymer-Fourier quantization of the scalar field revisited

    NASA Astrophysics Data System (ADS)

    Garcia-Chung, Angel; Vergara, J. David

    2016-10-01

    The polymer quantization of the Fourier modes of the real scalar field is studied within the algebraic scheme. We replace the positive linear functional of the standard Poincaré invariant quantization by a singular one, constructed to mimic the singular limit of the complex structure of the Poincaré invariant Fock quantization. The resulting symmetry group of this polymer quantization is SDiff(ℝ4), the subgroup of Diff(ℝ4) formed by spatial volume-preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré invariant Fock vacuum with the polymer Fourier vacuum.

  10. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and compare its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with the minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached with heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes, with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
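
    As a toy illustration of the syndrome-coding idea described above, the following sketch performs matrix embedding with a small hand-picked parity-check matrix and a brute-force coset search; the paper's convolutional codes and Viterbi-based quantizer are not reproduced here, and the matrix H and cost profile are made-up examples.

```python
import itertools
import numpy as np

H = np.array([[1, 0, 1, 1, 0, 1],
              [0, 1, 1, 0, 1, 1]])       # toy 2x6 parity-check matrix

def embed(cover, message, costs):
    # find the min-cost flip pattern with H @ stego = message (mod 2);
    # brute force is fine at toy sizes, the paper uses a Viterbi trellis
    best, best_cost = None, float("inf")
    for flips in itertools.product([0, 1], repeat=H.shape[1]):
        stego = (cover + np.array(flips)) % 2
        if np.array_equal(H @ stego % 2, message):
            cost = float(np.dot(flips, costs))   # single-letter distortion
            if cost < best_cost:
                best, best_cost = stego, cost
    return best, best_cost

cover = np.array([1, 0, 1, 1, 0, 0])
message = np.array([1, 0])
costs = np.array([1.0, 2.0, 1.0, 5.0, 1.0, 3.0])     # per-element embedding impact
stego, cost = embed(cover, message, costs)
print(stego, cost, (H @ stego) % 2)                  # receiver reads H @ stego
```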

  11. Instabilities caused by floating-point arithmetic quantization.

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1972-01-01

    It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability with a single quantizer in operation is treated.
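
    The effect described in this abstract can be reproduced in a few lines. The sketch below, with illustrative parameters not taken from the report, runs a stable first-order loop through a crude floating-point quantizer and shows that the output stops decaying (a quantization-induced limit cycle) while the ideal system keeps shrinking.

```python
import math

def fp_quantize(x, mant_bits=3):
    # round x to a toy float format with a short mantissa
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (e - mant_bits)
    return round(x / scale) * scale

a = 0.95                       # ideal loop y[n] = a*y[n-1] decays to zero
y_ideal = y_quant = 1.0
for _ in range(60):
    y_ideal = a * y_ideal
    y_quant = fp_quantize(a * y_quant)
print(y_ideal)                 # ~0.046 and still shrinking
print(y_quant)                 # stuck at 0.625: a quantization-induced limit cycle
```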

  12. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

    In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
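
    For readers unfamiliar with the logarithmic quantizer assumed in papers like this one, here is a minimal sketch: the quantization levels form a geometric ladder u_i = u0·ρ^i, so the relative quantization error stays bounded at every signal scale. The parameters u0 and ρ below are arbitrary illustrative choices, not values from the paper.

```python
import math

def log_quantize(v, u0=1.0, rho=0.8):
    # snap |v| to the nearest level of the geometric ladder u_i = u0 * rho**i
    if v == 0.0:
        return 0.0
    i = round(math.log(abs(v) / u0, rho))
    return math.copysign(u0 * rho ** i, v)

for v in [0.03, 0.5, 2.7, -10.0]:
    q = log_quantize(v)
    print(v, q, abs(q - v) / abs(v))   # relative error stays bounded at every scale
```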

  13. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl

    This paper presents an invariant quantization procedure of classical mechanics on the phase space over a flat configuration space, and then derives the passage to an operator representation of quantum mechanics in a Hilbert space over the configuration space. An explicit form of the position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over a flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over the configuration space is derived. •An explicit form of the position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of the presented formalism to the non-flat case and related ambiguities of the quantization process are discussed.

  14. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
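
    The compress-then-quantize-then-expand scheme described above can be sketched with μ-law companding, the textbook digital analogue of such an amplifier/expander pair (the thesis's actual analog network is not specified here). The toy below compares an 8-level uniform quantizer with and without companding on a low-amplitude, consonant-like signal.

```python
import numpy as np

def mu_compress(x, mu=255.0):
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=255.0):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def uniform_quantize(x, levels=8):
    # mid-tread uniform quantizer with roughly `levels` steps over [-1, 1]
    step = 2.0 / levels
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

x = 0.02 * np.sin(np.linspace(0, 2 * np.pi, 50))    # quiet, consonant-like signal
plain = uniform_quantize(x)
companded = mu_expand(uniform_quantize(mu_compress(x)))
print(np.abs(plain - x).max())      # the quiet signal is quantized away entirely
print(np.abs(companded - x).max())  # companding preserves it much better
```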

  15. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
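
    A single-level stand-in for the pyramid-based fast VQ search mentioned above can be built from the mean-elimination inequality d·(mean(x) − mean(c))² ≤ ‖x − c‖², which follows from the Cauchy-Schwarz inequality: codewords whose means are far from the query's mean can be rejected without a full distance computation. The sketch below uses random data and is not the paper's two-stage classifier.

```python
import numpy as np

def fast_vq_search(x, codebook):
    d = len(x)
    means = codebook.mean(axis=1)
    mx = x.mean()
    order = np.argsort(np.abs(means - mx))   # visit close-mean codewords first
    best_i, best_dist, checked = -1, np.inf, 0
    for i in order:
        if d * (means[i] - mx) ** 2 >= best_dist:
            break                            # every remaining codeword is worse
        dist = np.sum((x - codebook[i]) ** 2)
        checked += 1
        if dist < best_dist:
            best_i, best_dist = i, dist
    return best_i, best_dist, checked

rng = np.random.default_rng(0)
codebook = rng.uniform(-3, 3, (256, 4))      # 256 codewords, dimension 4
x = rng.uniform(-3, 3, 4)
i, dist, checked = fast_vq_search(x, codebook)
print(i, round(dist, 3), f"full distances computed: {checked}/256")
```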

  16. Heavy and Heavy-Light Mesons in the Covariant Spectator Theory

    NASA Astrophysics Data System (ADS)

    Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.

    2018-05-01

    The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum was calculated, and we discuss what conclusions can be drawn from it, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.

  17. Coherent state quantization of quaternions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com; Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com

    Parallel to the quantization of the complex plane, the quaternion field of quaternionic quantum mechanics is quantized using the canonical coherent states of a right quaternionic Hilbert space. The associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  18. A discrete mechanics approach to dislocation dynamics in BCC crystals

    NASA Astrophysics Data System (ADS)

    Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.

    2007-03-01

    A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers' vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem based on considerations of metastability is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation based crystal plasticity with fully atomistic resolution.

  19. Effects of training set selection on pain recognition via facial expressions

    NASA Astrophysics Data System (ADS)

    Shier, Warren A.; Yanushkevich, Svetlana N.

    2016-07-01

    This paper presents an approach to pain expression classification based on Gabor energy filters with Support Vector Machines (SVMs), followed by an analysis of the effects of training set variations on the system's classification rate. This approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images, hand-labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, each subject's pain intensity level has been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of experiments show that Gabor energy filters with SVMs provide comparable or better results than previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of the effects of intra-class skew, or changing the number of images per subject, shows that both completely removing and over-representing poor-quality subjects in the training set have little effect on the overall accuracy of the system. This result suggests that poor-quality subjects could be removed from the training set to save offline training time and that SVM is robust not only to outliers in training data, but also to significant amounts of poor-quality data mixed into the training sets.

  20. A novel computer-aided detection system for pulmonary nodule identification in CT images

    NASA Astrophysics Data System (ADS)

    Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong

    2014-03-01

    Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ only missed 2 out of the 207 nodules at agreement level 1, and the INC detection for each scan took about 30 seconds on average. Expert filtering reduced FPs by more than a factor of 18 while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable, where the optimal operating point of our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
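
    The first-stage VQ can be illustrated with plain k-means standing in for the paper's codebook design: voxel intensities are clustered so that low-density (air/lung) and high-density (body) tissue fall into different cells. The Hounsfield-like numbers below are synthetic, not LIDC-IDRI data.

```python
import numpy as np

def kmeans_vq(features, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels

# synthetic Hounsfield-like intensities: lung air vs. soft tissue
rng = np.random.default_rng(1)
hu = np.concatenate([rng.normal(-800, 60, 500), rng.normal(40, 60, 500)])
codebook, labels = kmeans_vq(hu[:, None], k=2)
print(codebook.ravel())    # one cell near -800 HU (lung), one near +40 HU (body)
```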

  1. Nucleation of Quantized Vortices from Rotating Superfluid Drops

    NASA Technical Reports Server (NTRS)

    Donnelly, Russell J.

    2001-01-01

    The long-term goal of this project is to study the nucleation of quantized vortices in helium II by investigating the behavior of rotating droplets of helium II in a reduced gravity environment. The objective of this ground-based research grant was to develop new experimental techniques to aid in accomplishing that goal. The development of an electrostatic levitator for superfluid helium, described below, and the successful suspension of charged superfluid drops in modest electric fields was the primary focus of this work. Other key technologies of general low temperature use were developed and are also discussed.

  2. Quantization of Spontaneously Broken Gauge Theory Based on the Bft-Bfv Formalism

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Wan; Park, Young-Jai

    We quantize the spontaneously broken Abelian U(1) Higgs model by using the improved BFT and BFV formalisms. We construct the BFT physical fields and obtain the first-class observables, including the Hamiltonian, in terms of these fields. We also explicitly show that there are exact form invariances between the second-class and first-class quantities. Then, according to the BFV formalism, we derive the corresponding Lagrangian having U(1) gauge symmetry. We also discuss at the classical level how one easily obtains the first-class Lagrangian from the symmetry-broken second-class Lagrangian.

  3. BFV quantization on hermitian symmetric spaces

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1995-02-01

    The gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/H. In particular, gauge-invariant quantization on the Lobachevski plane and sphere is carried out. Due to the presence of symmetry, master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes the geometric, deformation and Berezin quantization approaches.

  4. Unique Fock quantization of scalar cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier; Velhinho, José M.

    2012-05-01

    We investigate the ambiguities in the Fock quantization of the scalar perturbations of a Friedmann-Lemaître-Robertson-Walker model with a massive scalar field as matter content. We consider the case of compact spatial sections (thus avoiding infrared divergences), with the topology of a three-sphere. After expanding the perturbations in a series of eigenfunctions of the Laplace-Beltrami operator, the Hamiltonian of the system is written up to quadratic order in them. We fix the gauge of the local degrees of freedom in two different ways, reaching in both cases the same qualitative results. A canonical transformation, which includes the scaling of the matter-field perturbations by the scale factor of the geometry, is performed in order to arrive at a convenient formulation of the system. We then study the quantization of these perturbations in the classical background determined by the homogeneous variables. Based on previous work, we introduce a Fock representation for the perturbations in which: (a) the complex structure is invariant under the isometries of the spatial sections and (b) the field dynamics is implemented as a unitary operator. These two properties select not only a unique unitary equivalence class of representations, but also a preferred field description, picking up a canonical pair of field variables among all those that can be obtained by means of a time-dependent scaling of the matter field (completed into a linear canonical transformation). Finally, we present an equivalent quantization constructed in terms of gauge-invariant quantities. We prove that this quantization can be attained by a mode-by-mode time-dependent linear canonical transformation which admits a unitary implementation, so that it is also uniquely determined.

  5. A low complexity, low spur digital IF conversion circuit for high-fidelity GNSS signal playback

    NASA Astrophysics Data System (ADS)

    Su, Fei; Ying, Rendong

    2016-01-01

    A low-complexity, high-efficiency and low-spur digital intermediate frequency (IF) conversion circuit is discussed in this paper. This circuit is a key element in a high-fidelity GNSS signal playback instrument. We analyze the spur performance of a finite state machine (FSM) based numerically controlled oscillator (NCO); by optimizing the control algorithm, an FSM-based NCO with 3 quantization stages can achieve 65 dB spurious-free dynamic range (SFDR) up to the seventh harmonic. Compared with a traditional lookup-table-based NCO design with the same SFDR performance, the logic resources required to implement the NCO are reduced to one third. The proposed design method can be extended to IF conversion systems requiring good SFDR over higher harmonic components by increasing the number of quantization stages.

  6. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
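
    The Golomb-Rice half of the joint quantizer/coder can be sketched in software; with a power-of-two parameter m = 2^k the divide reduces to a shift, which is what makes sharing hardware with a single-slope ADC attractive. The zigzag mapping for signed residuals below is a common convention, not necessarily the chip's.

```python
def zigzag(v):
    # map signed residuals ... -2, -1, 0, 1, 2 ... to 3, 1, 0, 2, 4 ...
    return (v << 1) if v >= 0 else ((-v) << 1) - 1

def golomb_rice_encode(v, k=2):
    # unary quotient, '0' separator, then k remainder bits (m = 2**k)
    u = zigzag(v)
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

for residual in [0, 1, -1, 5, -12]:
    print(residual, golomb_rice_encode(residual))
```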

  7. Sn nanothreads in GaAs: experiment and simulation

    NASA Astrophysics Data System (ADS)

    Semenikhin, I.; Vyurkov, V.; Bugaev, A.; Khabibullin, R.; Ponomarev, D.; Yachmenev, A.; Maltsev, P.; Ryzhii, M.; Otsuji, T.; Ryzhii, V.

    2016-12-01

    Gated GaAs structures, such as a field-effect transistor with an array of Sn nanothreads, were fabricated via delta-doping of a vicinal GaAs surface by Sn atoms with a subsequent regrowth. This results in the formation of chains of Sn atoms at the terrace edges. Two device models were developed. The quantum model accounts for the quantization of the electron energy spectrum in the self-consistent two-dimensional electric potential, and the electron density distribution in nanothread arrays is calculated for different gate voltages. The classical model ignores the quantization; electrons are distributed in space according to the 3D density of states and Fermi-Dirac statistics. Qualitatively, both models demonstrate similar behavior; nevertheless, the classical one is in better quantitative agreement with the experimental data. Plausibly, the quantization can be ignored because the Sn atoms are randomly placed along the thread axis. Terahertz hot-electron bolometers (HEBs) could be based on the structure under consideration.

  8. Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks.

    PubMed

    Demirhan, Ayşe; Toru, Mustafa; Guler, Inan

    2015-07-01

    Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using a self-organizing map (SOM) that is trained with an unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. The input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that the average Dice similarity indices are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.
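
    A minimal LVQ1 fine-tuning step of the kind used to sharpen SOM prototypes is sketched below; the data, labels, and learning rate are synthetic stand-ins, not the paper's wavelet features.

```python
import numpy as np

def lvq1_epoch(protos, proto_labels, X, y, lr=0.05):
    for x, label in zip(X, y):
        w = np.argmin(((protos - x) ** 2).sum(axis=1))    # winning prototype
        sign = 1.0 if proto_labels[w] == label else -1.0  # attract or repel
        protos[w] += sign * lr * (x - protos[w])
    return protos

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
protos = np.array([[0.5, 0.5], [3.5, 3.5]])     # e.g. initialized from a SOM
proto_labels = np.array([0, 1])
print(lvq1_epoch(protos, proto_labels, X, y))   # prototypes move toward class means
```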

  9. Spin wave modes in out-of-plane magnetized nanorings

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Tartakovskaya, E. V.; Kakazei, G. N.; Adeyeye, A. O.

    2017-07-01

    We investigated the spin wave modes in flat circular permalloy rings with a canted external bias field using ferromagnetic resonance spectroscopy. The external magnetic field H was large enough to saturate the samples. For θ = 0∘ (perpendicular geometry), three distinct resonance peaks were observed experimentally. When the cylindrical symmetry was violated by inclining H from the normal to the ring plane (the inclination angle θ was varied in the 0∘-6∘ range), all of the initial peaks split. The distance between neighboring split peaks increased with θ. Unexpectedly, the biggest splitting was observed for the mode with the smallest radial wave vector. This special feature of the splitting behavior is determined by the topology of the ring shape. The developed analytical theory revealed that in the perpendicular geometry, each observed peak is a combination of signals from a set of radially quantized spin wave excitations with almost the same radial wave vectors, radial profiles, and frequencies, but with different azimuthal dependencies. This degeneracy is a consequence of the circular symmetry of the system and can be removed by inclining H from the normal. Our findings were further supported by micromagnetic simulations.

  10. An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.

    NASA Astrophysics Data System (ADS)

    Tate, Ranjeet Shekhar

    1992-01-01

    General relativity has two features in particular, which make it difficult to apply to it existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non -canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.

  11. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model overcomes the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have more bit resources, to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.

  12. Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS

    NASA Astrophysics Data System (ADS)

    Landsman, N. P.

    Quantization is defined as the act of assigning an appropriate C*-algebra { A} to a given configuration space Q, along with a prescription mapping self-adjoint elements of { A} into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here { A} is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of { A}. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of { A}.

  13. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is a prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
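
    The conventional autoassociative network described in this passage (not the proposed nexus) can be sketched as a small Hopfield-style network: Hebbian storage of ±1 patterns followed by synchronous recall of a binary state vector. All numbers below are illustrative.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])          # stored +/-1 patterns
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)                             # no self-connection

state = np.array([1, -1, 1, 1, 1, -1])               # noisy probe (one bit flipped)
for _ in range(5):                                   # discrete time steps
    state = np.where(W @ state >= 0, 1, -1)          # all neurons update at once
print(state)                                         # recalls the first pattern
```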

  14. Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback

    NASA Astrophysics Data System (ADS)

    Schwarz, Stefan; Heath, Robert W.; Rupp, Markus

    2013-12-01

    This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, which is enabled by a low-latency and high-bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such a distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. A DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term Evolution (LTE) standard-compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO in comparison to SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in the CSI quantization accuracy and in the throughput of especially MU-MIMO systems can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.
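
    Codebook-based limited feedback, the mechanism evaluated above, can be sketched as follows: the receiver picks from a shared codebook the unit vector best aligned with its channel and feeds back only the index. A random Gaussian codebook stands in here for the LTE codebook, and the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, bits = 4, 4                              # 4 TX antennas, 4 feedback bits
codebook = rng.normal(size=(2 ** bits, Nt)) + 1j * rng.normal(size=(2 ** bits, Nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)   # one channel realization
idx = int(np.argmax(np.abs(codebook.conj() @ h)))    # best-aligned codeword
quality = np.abs(codebook[idx].conj() @ h) / np.linalg.norm(h)
print(idx, round(float(quality), 3))                 # only the index is fed back
```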

  15. Light-cone quantization of two dimensional field theory in the path integral approach

    NASA Astrophysics Data System (ADS)

    Cortés, J. L.; Gamboa, J.

    1999-05-01

    A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x⁻ is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.

  16. Relational symplectic groupoid quantization for constant Poisson structures

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin

    2017-09-01

    As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focusing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.

  17. Entanglement Dynamics of Linear and Nonlinear Interaction of Two Two-Level Atoms with a Quantized Phase-Damped Field in the Dispersive Regime

    NASA Astrophysics Data System (ADS)

    Tavassoly, M. K.; Daneshmand, R.; Rustaee, N.

    2018-06-01

    In this paper we study the linear and nonlinear (intensity-dependent) interactions of two two-level atoms with a single-mode quantized field far from resonance, while the phase-damping effect is also taken into account. To find the analytical solution of the atom-field state vector corresponding to the considered model, after deducing the effective Hamiltonian we evaluate the time-dependent elements of the density operator using the master equation approach and the superoperator method. Consequently, we are able to study the influences of the particular nonlinearity function f(n) = √n, the intensity of the initial coherent state field and the phase-damping parameter on the degree of entanglement of the whole system as well as of the field and atoms. It is shown that in the presence of damping, as time passes, the amount of entanglement of each subsystem with the rest of the system asymptotically reaches its stationary and maximum value. Also, the nonlinear interaction does not have any effect on the entanglement of one of the atoms with the rest of the system, but it changes the amplitude and time period of the entanglement oscillations of the field and the other atom. Moreover, introducing the intensity-dependent function into the atom-field coupling may cause the degree of entanglement, which is low (high) at some moments of time, to become high (low).

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Fiorella; Graffi, Sandro

    We consider on L²(T²) the Schrödinger operator family Lε, ε ∈ ℝ, with domain and action defined as D(Lε) = H²(T²), Lεu = −(1/2)ℏ²(α₁∂φ₁²u + α₂∂φ₂²u) − iℏ(γ₁∂φ₁ + γ₂∂φ₂)u + εVu. Here ε ∈ ℝ, α = (α₁, α₂) and γ = (γ₁, γ₂) are vectors of complex non-real frequencies, and V is a pseudodifferential operator of order zero. Lε represents the Weyl quantization of the Hamiltonian family Lε(ξ, x) = (1/2)(α₁ξ₁² + α₂ξ₂²) + γ₁ξ₁ + γ₂ξ₂ + εV(ξ, x) defined on the phase space ℝ²×T², where V(ξ, x) ∈ C²(ℝ²×T²; ℝ). We prove the uniform convergence with respect to ℏ ∈ [0, 1] of the quantum normal form, which reduces to the classical one for ℏ = 0. This result simultaneously entails an exact quantization formula for the quantum spectrum as well as a convergence criterion for the classical Birkhoff normal form, generalizing a well-known theorem of Cherry.

  19. Fractional corresponding operator in quantum mechanics and applications: A uniform fractional Schrödinger equation in form and fractional quantization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiao; Science and Technology on Electronic Information Control Laboratory, 610036, Chengdu, Sichuan; Wei, Chaozhen

    2014-11-15

    In this paper we use the Dirac function to construct a fractional operator called the fractional corresponding operator, which is the general form of the momentum corresponding operator. We then give a judging theorem for this operator, and with this theorem we prove that the R-L, G-L, Caputo and Riesz fractional derivative operators and the fractional derivative operator based on generalized functions, which are the most popular ones, coincide with the fractional corresponding operator. As a typical application, we use the fractional corresponding operator to construct a new fractional quantization scheme and then derive a uniform fractional Schrödinger equation in form; additionally, we find that the five known forms of the fractional Schrödinger equation are particular cases. As another main result of this paper, we use the fractional corresponding operator to generalize the fractional quantization scheme based on the Lévy path integral and use it to derive the corresponding general form of the fractional Schrödinger equation, which consequently proves that these two quantization schemes are equivalent. Meanwhile, relations between the theory in fractional quantum mechanics and that in classic quantum mechanics are also discussed. As a physical example, we consider a particle in an infinite potential well. We give its wave functions and energy spectra in two ways and find that both results are the same.

  20. Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.

    2006-09-15

    Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.

  1. Response of two-band systems to a single-mode quantized field

    NASA Astrophysics Data System (ADS)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  2. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.

  3. Universe creation from the third-quantized vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuigan, M.

    1989-04-15

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  4. Universe creation from the third-quantized vacuum

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-04-01

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  5. 4D Sommerfeld quantization of the complex extended charge

    NASA Astrophysics Data System (ADS)

    Bulyzhenkov, Igor E.

    2017-12-01

    Gravitational fields and accelerations cannot change the quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can quantitatively introduce the fine structure constant and the Planck length.

  6. Monitoring the Microgravity Environment Quality On-Board the International Space Station Using Soft Computing Techniques

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Lin, Paul P.

    2001-01-01

    This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a graphical display, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting the vibratory disturbance sources, but could be used as well to detect some of the transient disturbance sources, depending on the events duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.

  7. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, though when only a cloud-free section of the image is considered, the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  8. Classification of damage in structural systems using time series analysis and supervised and unsupervised pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; de Lautour, Oliver R.

    2010-04-01

    Developed for studying long, periodic records of various measured quantities, time series analysis methods are inherently suited to and offer interesting possibilities for Structural Health Monitoring (SHM) applications. However, their use in SHM can still be regarded as an emerging application and deserves more study. In this research, Autoregressive (AR) models were used to fit experimental acceleration time histories from two experimental structural systems, a 3-storey bookshelf-type laboratory structure and the ASCE Phase II SHM Benchmark Structure, in healthy and several damaged states. The coefficients of the AR models were chosen as damage-sensitive features. Preliminary visual inspection of the large, multidimensional sets of AR coefficients to check the presence of clusters corresponding to different damage severities was achieved using Sammon mapping, an efficient nonlinear data compression technique. Systematic classification of damage into states based on the analysis of the AR coefficients was achieved using two supervised classification techniques, Nearest Neighbor Classification (NNC) and Learning Vector Quantization (LVQ), and one unsupervised technique, Self-Organizing Maps (SOM). This paper discusses the performance of AR coefficients as damage-sensitive features and compares the efficiency of the three classification techniques using experimental data.
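
    A minimal version of this feature pipeline: fit an AR model to an acceleration record by least squares, treat the coefficient vector as the damage-sensitive feature, and classify with nearest-neighbor (NNC). The two-pole systems below are synthetic stand-ins for the laboratory structures, with made-up pole locations.

```python
import numpy as np

def ar_coeffs(x, order=4):
    # least-squares fit of x[n] ~ a1*x[n-1] + ... + ap*x[n-p]
    X = np.column_stack([x[order - i:len(x) - i] for i in range(1, order + 1)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

def simulate(a1, a2, n=2000, seed=0):
    # a two-pole system standing in for a measured acceleration record
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
    return x

healthy = [ar_coeffs(simulate(1.2, -0.6, seed=s)) for s in range(5)]
damaged = [ar_coeffs(simulate(0.9, -0.5, seed=s)) for s in range(5, 10)]
feats = np.array(healthy + damaged)
labels = np.array([0] * 5 + [1] * 5)

test = ar_coeffs(simulate(0.9, -0.5, seed=42))        # unseen "damaged" record
nearest = np.argmin(((feats - test) ** 2).sum(axis=1))
print("predicted state:", labels[nearest])            # expect 1 (damaged)
```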

  9. Can Self-Organizing Maps Accurately Predict Photometric Redshifts?

    NASA Technical Reports Server (NTRS)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = zphot − zspec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
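
    A toy 1-D SOM version of this approach, on synthetic colors and redshifts rather than SDSS data: train the map on photometric features, then predict z for each cell as the mean spectroscopic redshift of the training objects it captures. All sizes and schedules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
colors = rng.uniform(0, 2, (2000, 3))                 # fake photometric features
z = 0.3 * colors[:, 0] + 0.1 * colors[:, 1] + rng.normal(0, 0.02, 2000)

n_cells = 50
W = rng.uniform(0, 2, (n_cells, 3))                   # 1-D map of 50 cells
for t, x in enumerate(colors):
    frac = t / len(colors)
    lr = 0.5 * (1 - frac)                             # decaying learning rate
    sigma = max(1.0, 5.0 * (1 - frac))                # shrinking neighborhood
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best-matching unit
    h = np.exp(-(np.arange(n_cells) - bmu) ** 2 / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)

cells = np.argmin(((colors[:, None] - W[None]) ** 2).sum(-1), axis=1)
cell_z = np.array([z[cells == c].mean() if np.any(cells == c) else 0.0
                   for c in range(n_cells)])
rmse = np.sqrt(np.mean((cell_z[cells] - z) ** 2))
print("SOM photo-z RMSE on training data:", round(float(rmse), 4))
```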

  10. Number-phase entropic squeezing and nonclassical properties of a three-level atom interacting with a two-mode field: intensity-dependent coupling, deformed Kerr medium, and detuning effects

    NASA Astrophysics Data System (ADS)

    Faghihi, Mohammad Javad; Tavassoly, Mohammad Kazem

    2013-11-01

    In this paper, we follow our model presented in J. Opt. Soc. Am. B 30, 1109-1117 (2013), in which the interaction between a Λ-type three-level atom and a quantized two-mode radiation field in a cavity in the presence of nonlinearities is studied. After giving a brief review of the procedure for obtaining the state vector of the atom-field system, some further interesting and important physical features of the whole system state (which are of particular interest in quantum optics research) are investigated: the number-phase entropic uncertainty relation (based on the two-mode Pegg-Barnett formalism) and several nonclassicality signs consisting of sub-Poissonian statistics, the Cauchy-Schwarz inequality and two kinds of squeezing phenomena. Throughout, the effects of intensity-dependent coupling, a deformed Kerr medium and the detuning parameters on the depth and domain of each of the mentioned nonclassical criteria of the considered quantum system are studied in detail. It is shown that each of the mentioned nonclassicality aspects can be obtained by appropriately choosing the related parameters.

  11. A variable rate speech compressor for mobile applications

    NASA Technical Reports Server (NTRS)

    Yeldener, S.; Kondoz, A. M.; Evans, B. G.

    1990-01-01

    One of the most promising speech coders at bit rates of 9.6 to 4.8 kbit/s is CELP. Code Excited Linear Prediction (CELP) has dominated the 9.6 to 4.8 kbit/s region during the past 3 to 4 years. Its setback, however, is its expensive implementation. As an alternative to CELP, the Base-Band CELP (CELP-BB) was developed, which produces good quality speech comparable to CELP with a complexity implementable on a single chip, as reported previously. Its robustness was also improved to tolerate errors up to 1.0 pct. and maintain intelligibility up to 5.0 pct. and more. Although CELP-BB produces good quality speech at around 4.8 kbit/s, it has a fundamental problem when updating the pitch filter memory. A sub-optimal solution is proposed for this problem. Below 4.8 kbit/s, however, CELP-BB suffers from noticeable quantization noise as a result of the large vector dimensions used. Efficient representation of speech below 4.8 kbit/s is reported by introducing Sinusoidal Transform Coding (STC) to represent the LPC excitation, which is called Sine Wave Excited LPC (SWELP). In this case, natural-sounding good quality synthetic speech is obtained at around 2.4 kbit/s.

  12. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network. They are the following: (1) a constant output resolution associated with the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments, subsequently allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results have demonstrated that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performance with smaller memory usage.

  13. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.

  14. Noise-shaping gradient descent-based online adaptation algorithms for digital calibration of analog circuits.

    PubMed

    Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji

    2013-04-01

    Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient-methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
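
    As a rough scalar illustration of the noise-shaping idea (not the authors' circuit calibration loop), the sketch below constrains the parameter to DAC-like quantized levels and feeds the residual quantization error back into the next update, in the manner of a first-order sigma-delta modulator. The step size, learning rate, and test function are assumed values.

        import numpy as np

        def sigma_delta_gd(grad, theta0, lr=0.1, delta=0.05, steps=200):
            """Toy first-order noise-shaped ("sigma-delta") quantized descent.

            The parameter may only take multiples of the DAC step `delta`.
            The residual quantization error `e` is fed back into the next
            update, pushing quantization noise out of the low-frequency
            band that the adaptation trajectory occupies.
            """
            theta = np.round(theta0 / delta) * delta
            e = 0.0                               # accumulated quantization error
            for _ in range(steps):
                u = theta - lr * grad(theta) + e  # ideal update plus error feedback
                q = np.round(u / delta) * delta   # nearest representable level
                e = u - q                         # carry the residual forward
                theta = q
            return theta

        # usage: minimize (x - 0.713)^2; the optimum is off the DAC grid
        print(sigma_delta_gd(lambda x: 2 * (x - 0.713), theta0=1.0))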

  15. Functional integral for non-Lagrangian systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    2010-02-01

    A functional integral formulation of quantum mechanics for non-Lagrangian systems is presented. The approach, which we call “stringy quantization,” is based solely on classical equations of motion and is free of any ambiguity arising from Lagrangian and/or Hamiltonian formulation of the theory. The functionality of the proposed method is demonstrated on several examples. Special attention is paid to the stringy quantization of systems with a general A-power friction force -κq̇^A. Results for A = 1 are compared with those obtained in the approaches by Caldirola-Kanai, Bateman, and Kostin. Relations to the Caldeira-Leggett model and to the Feynman-Vernon approach are discussed as well.

  16. The quantization of the chiral Schwinger model based on the BFT-BFV formalism II

    NASA Astrophysics Data System (ADS)

    Park, Mu-In; Park, Young-Jai; Yoon, Sean J.

    1998-12-01

    We apply an improved version of the Batalin-Fradkin-Tyutin Hamiltonian method to the a = 1 chiral Schwinger model, which is much more nontrivial than the a > 1 one. Furthermore, through the path integral quantization, we newly resolve the problem of the nontrivial delta-function as well as that of the unwanted Fourier parameter in the measure. As a result, we explicitly obtain the fully gauge invariant partition function, which includes a new type of Wess-Zumino term irrelevant to the gauge symmetry as well as the usual WZ action.

  17. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  18. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  19. Optical memory based on quantized atomic center-of-mass motion.

    PubMed

    Lopez, J P; de Almeida, A J F; Felinto, D; Tabosa, J W R

    2017-11-01

    We report a new type of optical memory using a pure two-level system of cesium atoms cooled by the magnetically assisted Sisyphus effect. The optical information of a probe field is stored in the coherence between quantized vibrational levels of the atoms in the potential wells of a 1-D optical lattice. The retrieved pulse shows Rabi oscillations with a frequency determined by the reading beam intensity, which are qualitatively understood in terms of a simple theoretical model. The exploration of the external degrees of freedom of an atom may add another capability in the design of quantum-information protocols using light.

  20. A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors.

    PubMed

    Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li

    2009-09-28

    A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for applying DLCWFCs to atmospheric turbulence correction with large-aperture telescopes.

  1. Ellipsoidal fuzzy learning for smart car platoons

    NASA Astrophysics Data System (ADS)

    Dickerson, Julie A.; Kosko, Bart

    1993-12-01

    A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids, or more certain rules. Sparse data gave larger ellipsoids, or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.

  2. Analysis of the Westland Data Set

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2001-01-01

    The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters call improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior oil training data and is thus able to quantify probability of error in all exact manner, such that features may be discarded or coarsened appropriately.

  3. Medical Image Retrieval Using Multi-Texton Assignment.

    PubMed

    Tang, Qiling; Yang, Jirong; Xia, Xianfu

    2018-02-01

    In this paper, we present a multi-texton representation method for medical image retrieval, which utilizes a locality constraint to encode each filter bank response within its local coordinate system, consisting of the k nearest neighbors in the texton dictionary, and subsequently employs the spatial pyramid matching technique for feature vector representation. Compared with the traditional method of nearest neighbor assignment followed by texton histogram statistics, our strategies reduce the quantization errors in the mapping process and add information about the spatial layout of texton distributions, thus increasing the descriptive power of the image representation. We investigate the effects of different parameters on system performance in order to choose appropriate ones for our datasets and carry out experiments on the IRMA-2009 medical collection and the mammographic patch dataset. The extensive experimental results demonstrate that the proposed method has superior performance.

  4. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
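
    As a rough sketch of the mechanism (not the patented perceptual-masking method), the block below quantizes the DCT coefficients of an 8x8 pixel block with a quantization matrix; the matrix here is a made-up ramp that merely coarsens high frequencies, standing in for a perceptually optimized one.

        import numpy as np
        from scipy.fft import dctn, idctn

        def quantize_block(block, Q):
            """DCT an 8x8 pixel block, then quantize coefficient (u, v) by Q[u, v]."""
            return np.round(dctn(block, norm='ortho') / Q).astype(int)

        def dequantize_block(indices, Q):
            return idctn(indices * Q, norm='ortho')

        u = np.arange(8)
        Q = 8.0 + 4.0 * (u[:, None] + u[None, :])  # hypothetical ramp: coarser at high freq.

        block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
        rec = dequantize_block(quantize_block(block, Q), Q)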

  5. Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    Veraguth, Olivier J.; Wang, Charles H.-T.

    2017-10-01

    Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry allowing a reformulated Immirzi parameter, necessary for loop quantization, to behave like an arbitrary group parameter that requires no further fixing, unlike its present standard form. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.

  6. Tribology of the lubricant quantized sliding state.

    PubMed

    Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio

    2009-11-07

    In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.

  7. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
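
    A minimal sketch of quantization with subtractive dithering, in the spirit of (but not identical to) the fpack algorithm; q_step plays the role of the intensity-level spacing and is an assumed parameter.

        import numpy as np

        def quantize_with_dither(img, q_step, seed=0):
            """Map float pixels to integer levels with subtractive dithering."""
            rng = np.random.default_rng(seed)
            d = rng.random(img.shape)                   # uniform dither in [0, 1)
            q = np.floor(img / q_step + d).astype(np.int32)
            return q, d

        def restore(q, d, q_step):
            # subtracting the same dither makes the error zero-mean and
            # uncorrelated with the signal, preserving photometric precision
            return (q - d + 0.5) * q_step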

  8. Quantized Majorana conductance

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.

    2018-04-01

    Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  9. Quantized Majorana conductance.

    PubMed

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D S; de Moor, Michiel W A; Car, Diana; Op Het Veld, Roy L M; van Veldhoven, Petrus J; Koelling, Sebastian; Verheijen, Marcel A; Pendharkar, Mihir; Pennachio, Daniel J; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J; Bakkers, Erik P A M; Sarma, S Das; Kouwenhoven, Leo P

    2018-04-05

    Majorana zero-modes, a type of localized quasiparticle, hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  10. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. A computer simulation of the quantizer's operation was tested with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; speech recording, sampling, and storage; and processing of results.

  11. Phase noise mitigation of QPSK signal utilizing phase-locked multiplexing of signal harmonics and amplitude saturation.

    PubMed

    Mohajerin-Ariaei, Amirhossein; Ziyadi, Morteza; Chitgarha, Mohammad Reza; Almaiman, Ahmed; Cao, Yinwen; Shamee, Bishara; Yang, Jeng-Yuan; Akasaka, Youichi; Sekiya, Motoyoshi; Takasaka, Shigehiro; Sugizaki, Ryuichi; Touch, Joseph D; Tur, Moshe; Langrock, Carsten; Fejer, Martin M; Willner, Alan E

    2015-07-15

    We demonstrate an all-optical phase noise mitigation scheme based on the generation, delay, and coherent summation of higher order signal harmonics. The signal, its third-order harmonic, and their corresponding delayed variant conjugates create a staircase phase-transfer function that quantizes the phase of the quadrature-phase-shift-keying (QPSK) signal to mitigate phase noise. The signal and the harmonics are automatically phase-locked multiplexed, avoiding the need for a phase-based feedback loop and injection locking to maintain coherency. The residual phase noise converts to amplitude noise in the quantizer stage, which is suppressed by parametric amplification in the saturation regime. Phase noise reduction of ∼40% and an OSNR gain of ∼3 dB at a BER of 10^(-3) are experimentally demonstrated for 20- and 30-Gbaud QPSK input signals.

  12. Novel Oversampling Technique for Improving Signal-to-Quantization Noise Ratio on Accelerometer-Based Smart Jerk Sensors in CNC Applications.

    PubMed

    Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo

    2009-01-01

    Jerk, defined as the first derivative of acceleration, has become a major monitoring concern in computerized numeric controlled (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way for improving production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to a low signal-to-quantization noise ratio (SQNR) during the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer, with improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
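
    A toy sketch of why oversampling helps, under assumed rates (it is not the paper's FPGA design): averaging R oversampled, quantized acceleration readings per output point attenuates the quantization noise before the finite difference is taken.

        import numpy as np

        def jerk_nyquist(acc, fs):
            """Finite-difference jerk from acceleration sampled at fs."""
            return np.diff(acc) * fs

        def jerk_oversampled(acc_os, fs, R):
            """Average R-times-oversampled acceleration down to fs, then differentiate.

            Averaging R quantized samples per output point attenuates the
            quantization noise (roughly by sqrt(R)), so the differentiated
            signal has a higher SQNR than the plain Nyquist-rate estimate.
            """
            dec = acc_os[: len(acc_os) // R * R].reshape(-1, R).mean(axis=1)
            return np.diff(dec) * fs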

  13. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
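
    For context, a minimal dither-modulation QIM sketch (the basic scheme underlying the paper's sparse variants, not the authors' wavelet-tree grouping): each selected coefficient is quantized with one of two interleaved quantizers depending on the watermark bit, and decoding picks the closer lattice. The step size delta is an assumed parameter controlling the robustness/distortion trade-off.

        import numpy as np

        def qim_embed(c, bits, delta):
            """Embed one bit per coefficient using two interleaved quantizers."""
            d = np.where(bits == 0, -delta / 4.0, delta / 4.0)  # per-bit dither
            return np.round((c - d) / delta) * delta + d

        def qim_extract(c, delta):
            """Decode each coefficient by the closer of the two lattices."""
            d0 = np.abs(c - (np.round((c + delta / 4) / delta) * delta - delta / 4))
            d1 = np.abs(c - (np.round((c - delta / 4) / delta) * delta + delta / 4))
            return (d1 < d0).astype(int)

        rng = np.random.default_rng(0)
        coeffs, bits = rng.normal(0, 10, 64), rng.integers(0, 2, 64)
        marked = qim_embed(coeffs, bits, delta=2.0)
        assert (qim_extract(marked, 2.0) == bits).all()   # exact without attack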

  14. Exploratory research session on the quantization of the gravitational field. At the Institute for Theoretical Physics, Copenhagen, Denmark, June-July 1957

    NASA Astrophysics Data System (ADS)

    DeWitt, Bryce S.

    2017-06-01

    During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.

  15. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes achieve good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since classification requires most of the processing time needed to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution in this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.

  16. Topologies on quantum topoi induced by quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider effects of quantization in a topos approach to quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  17. Bulk-edge correspondence in topological transport and pumping

    NASA Astrophysics Data System (ADS)

    Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro

    2018-03-01

    The bulk-edge correspondence (BEC) refers to a one-to-one relation between the bulk and edge properties ubiquitous in topologically nontrivial systems. Depending on the setup, BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is theoretically old, BEC in the pump was established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup and in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk. Its quantization is guaranteed by a compensation between the bulk and edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.

  18. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
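
    A sketch of the underlying quantizers, under assumed step sizes and threshold; the paper's actual index coding and rate-distortion adjustment are not reproduced here.

        import numpy as np

        def usdq(x, step):
            """Uniform scalar deadzone quantization: the zero bin is twice as wide."""
            return (np.sign(x) * np.floor(np.abs(x) / step)).astype(int)

        def usdq_reconstruct(q, step):
            return np.sign(q) * (np.abs(q) + 0.5) * step   # mid-bin; zero stays zero

        def two_step_sdq(x, fine, coarse, threshold):
            """2SDQ-flavored sketch: the dense low-magnitude population gets a
            fine step; sparse large coefficients get a coarse one."""
            small = np.abs(x) < threshold
            return np.where(small, usdq(x, fine), usdq(x, coarse)), small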

  19. From black holes to white holes: a quantum gravitational, symmetric bounce

    NASA Astrophysics Data System (ADS)

    Olmedo, Javier; Saini, Sahil; Singh, Parampreet

    2017-11-01

    Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.

  20. From Weyl to Born-Jordan quantization: The Schrödinger representation revisited

    NASA Astrophysics Data System (ADS)

    de Gosson, Maurice A.

    2016-03-01

    The ordering problem has been one of the long-standing and much-discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one uses short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, and this without affecting the main existing theory; for instance, quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail. Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution reducing interference effects.

  1. Electroencephalographic compression based on modulated filter banks and wavelet transform.

    PubMed

    Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2011-01-01

    Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals. We compare the performance of compression schemes with decomposition by filter banks or wavelet packet transformation, seeking the best compression, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the compressor with filter banks performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
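
    A minimal sketch of a quantization stage adapted to each band's dynamic range (an illustration of the general idea, not the paper's codec); the bit budget is an assumed parameter.

        import numpy as np

        def quantize_bands(bands, bits=8):
            """Uniformly quantize each subband with a step set by its own range."""
            coded = []
            for b in bands:
                lo, hi = float(b.min()), float(b.max())
                step = (hi - lo) / (2 ** bits - 1) or 1.0   # guard against flat bands
                q = np.round((b - lo) / step).astype(np.uint16)
                coded.append((q, lo, step))                 # per-band side information
            return coded

        def dequantize_bands(coded):
            return [q * step + lo for q, lo, step in coded]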

  2. Precise quantization of the anomalous Hall effect near zero magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestwick, A. J.; Fox, E. J.; Kou, Xufeng

    In this study, we report a nearly ideal quantum anomalous Hall effect in a three-dimensional topological insulator thin film with ferromagnetic doping. Near zero applied magnetic field we measure exact quantization in the Hall resistance to within a part per 10,000 and a longitudinal resistivity under 1 Ω per square, with chiral edge transport explicitly confirmed by nonlocal measurements. Deviations from this behavior are found to be caused by thermally activated carriers, as indicated by an Arrhenius law temperature dependence. Using the deviations as a thermometer, we demonstrate an unexpected magnetocaloric effect and use it to reach near-perfect quantization by cooling the sample below the dilution refrigerator base temperature in a process approximating adiabatic demagnetization refrigeration.

  3. Spectral feature extraction of EEG signals and pattern recognition during mental tasks of 2-D cursor movements for BCI using SVM and ANN.

    PubMed

    Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2016-09-01

    A brain computer interface (BCI) is a new communication pathway between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG): it extracts brain electrical activities recorded by EEG and transforms them into machine control commands. The main goal of BCI is to make assistive environmental devices, such as computers, available to paralyzed people and to make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. The hemispherical power density changes are computed and compared on alpha-beta frequency bands with only mental imagination of cursor movements. First of all, power spectral density (PSD) features of the EEG signals are extracted, and the high dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and mental task patterns are successfully identified via the k-fold cross validation technique.
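
    An illustrative offline pipeline in this spirit, with an assumed sampling rate and band limits, and an SVM standing in for the several classifiers compared in the paper; array shapes and parameters are hypothetical.

        import numpy as np
        from scipy.signal import welch
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def psd_features(trials, fs=250, band=(8.0, 30.0)):
            """Welch PSD per channel, restricted to the alpha-beta band."""
            f, p = welch(trials, fs=fs, nperseg=fs, axis=-1)
            keep = (f >= band[0]) & (f <= band[1])
            return p[..., keep].reshape(len(trials), -1)

        # trials: array (n_trials, n_channels, n_samples); y: imagined direction
        # clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel='linear'))
        # clf.fit(psd_features(trials), y)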

  4. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  5. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and add a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles can be applied equally well to other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
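
    The effect of bit depth can be illustrated with a toy experiment: quantize a full-scale sinusoid at several bit depths and measure the signal-to-quantization-noise ratio, which should track the familiar 6.02·B + 1.76 dB rule. This is a generic illustration, not the paper's fringe-analysis setup.

        import numpy as np

        def sqnr_db(signal, bits):
            """Quantize a signal in [-1, 1] to 2**bits levels; return SQNR in dB."""
            levels = 2 ** bits
            q = np.round((signal + 1) / 2 * (levels - 1))   # integer codes
            rec = q / (levels - 1) * 2 - 1                  # back to [-1, 1]
            noise = signal - rec
            return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

        t = np.linspace(0, 1, 100_000)
        s = np.sin(2 * np.pi * 50 * t)
        print([round(sqnr_db(s, b), 1) for b in (8, 10, 12)])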

  6. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.

  7. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to Feynman's standard approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity, and quantum mechanics in the presence of friction are analyzed in detail.

  8. A hybrid video codec based on extended block sizes, recursive integer transforms, improved interpolation, and flexible motion representation

    NASA Astrophysics Data System (ADS)

    Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.

    2011-01-01

    This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.

  9. Hydrodynamical model of anisotropic, polarized turbulent superfluids. I: constraints for the fluxes

    NASA Astrophysics Data System (ADS)

    Mongiovì, Maria Stella; Restuccia, Liliana

    2018-02-01

    This work is the first of a series of papers devoted to the study of the influence of the anisotropy and polarization of the tangle of quantized vortex lines in superfluid turbulence. A thermodynamical model of inhomogeneous superfluid turbulence previously formulated is here extended to take these effects into consideration. The model chooses as its thermodynamic state vector the density, the velocity, the energy density, the heat flux, and a complete vorticity tensor field, including its symmetric traceless part and its antisymmetric part. The relations which constrain the constitutive quantities are deduced from the second principle of thermodynamics using the Liu procedure. The results show that the presence of anisotropy and polarization in the vortex tangle affects in a substantial way the dynamics of the heat flux, allows us to give a physical interpretation of the vorticity tensor introduced here, and leads to a better description of the internal structure of a turbulent superfluid.

  10. FPGA implementation of self organizing map with digital phase locked loops.

    PubMed

    Hikawa, Hiroomi

    2005-01-01

    The self-organizing map (SOM) has found applicability in a wide range of application areas. Recently, new SOM hardware using phase-modulated pulse signals and digital phase-locked loops (DPLLs) was proposed (Hikawa, 2005). The system uses the DPLL as a computing element, since the operation of the DPLL is very similar to SOM computation. The system also uses square waveform phase to hold the value of each input vector element. This paper discusses the hardware implementation of the DPLL SOM architecture. For effective hardware implementation, some components are redesigned to reduce the circuit size. The proposed SOM architecture is described in VHDL and implemented on a field programmable gate array (FPGA). Its feasibility is verified by experiments. Results show that the proposed SOM implemented on the FPGA has good quantization capability and a very small circuit size.

  11. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research efforts have been placed on land-use scene classification. However, the complex background and multiple land-cover classes or objects make the task difficult with HRS images. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, the convolutional neural network is introduced to learn and characterize the local features at different scales. Then, learnt multiscale deep features are explored to generate visual words. The spatial arrangement of visual words is captured through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves competitive classification results with state-of-the-art methods.

  12. New exact solutions of the Dirac and Klein-Gordon equations of a charged particle propagating in a strong laser field in an underdense plasma

    NASA Astrophysics Data System (ADS)

    Varró, Sándor

    2014-03-01

    Exact solutions are presented of the Dirac and Klein-Gordon equations of a charged particle propagating in a classical monochromatic electromagnetic plane wave in a medium of index of refraction n_m < 1. In the Dirac case the solutions are expressed in terms of new complex polynomials, and in the Klein-Gordon case the solutions found are expressed in terms of Ince polynomials. In each case they form a doubly infinite set, labeled by two integer quantum numbers. These integer numbers represent quantized momentum components of the charged particle along the polarization vector and along the propagation direction of the electromagnetic radiation. Since this radiation may represent a plasmon wave of arbitrarily high amplitude, propagating in an underdense plasma, the solutions obtained may have relevance in describing possible quantum features of novel acceleration mechanisms.

  13. Competitive learning with pairwise constraints.

    PubMed

    Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep

    2013-01-01

    Constrained clustering has been an active research topic over the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results, in terms of the normalized mutual information (NMI), from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
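
    A minimal sketch of plain rival penalized competitive learning (the unconstrained core of C-RPCL; the paper's must-link/cannot-link handling is omitted), with illustrative learning and de-learning rates:

        import numpy as np

        def rpcl(X, k=3, lr=0.05, penalty=0.005, epochs=10, seed=0):
            """Rival penalized competitive learning: the winner is attracted to
            the sample while the runner-up is pushed slightly away, which
            drives superfluous prototypes out of the data."""
            rng = np.random.default_rng(seed)
            W = X[rng.choice(len(X), size=k, replace=False)].copy()
            for _ in range(epochs):
                for x in X[rng.permutation(len(X))]:
                    d = ((W - x) ** 2).sum(axis=1)
                    win, rival = np.argsort(d)[:2]
                    W[win] += lr * (x - W[win])            # attract the winner
                    W[rival] -= penalty * (x - W[rival])   # repel the rival
            return W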

  14. Quantify spatial relations to discover handwritten graphical symbols

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Mouchère, Harold; Viard-Gaudin, Christian

    2012-01-01

    To model a handwritten graphical language, spatial relations describe how the strokes are positioned in the 2-dimensional space. Most of existing handwriting recognition systems make use of some predefined spatial relations. However, considering a complex graphical language, it is hard to express manually all the spatial relations. Another possibility would be to use a clustering technique to discover the spatial relations. In this paper, we discuss how to create a relational graph between strokes (nodes) labeled with graphemes in a graphical language. Then we vectorize spatial relations (edges) for clustering and quantization. As the targeted application, we extract the repetitive sub-graphs (graphical symbols) composed of graphemes and learned spatial relations. On two handwriting databases, a simple mathematical expression database and a complex flowchart database, the unsupervised spatial relations outperform the predefined spatial relations. In addition, we visualize the frequent patterns on two text-lines containing Chinese characters.

  15. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Research on this technique has concentrated on the uncompressed spatial domain; performing it in compressed domains such as vector quantization, JPEG, and block truncation coding (BTC) is considered more challenging. In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves the same low bit rate as the original BTC algorithm.
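
    For context, a minimal sketch of classic BTC block coding, the carrier the proposed scheme embeds data into (the dynamic-programming LSB mapping itself is not reproduced here):

        import numpy as np

        def btc_encode_block(block):
            """Classic BTC: keep two gray levels plus a 1-bit plane per pixel."""
            m, s = block.mean(), block.std()
            bitmap = block >= m
            q, n = int(bitmap.sum()), block.size       # pixels at or above the mean
            if q in (0, n):
                lo = hi = m                            # flat block
            else:
                lo = m - s * np.sqrt(q / (n - q))      # level written for 0-bits
                hi = m + s * np.sqrt((n - q) / q)      # level written for 1-bits
            return bitmap, lo, hi

        def btc_decode_block(bitmap, lo, hi):
            return np.where(bitmap, hi, lo)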

  16. Magnetofermionic condensate in two dimensions

    PubMed Central

    Kulik, L. V.; Zhuravlev, A. S.; Dickmann, S.; Gorbunov, A. V.; Timofeev, V. B.; Kukushkin, I. V.; Schmult, S.

    2016-01-01

    Coherent condensate states of particles obeying either Bose or Fermi statistics are in the focus of interest in modern physics. Here we report on condensation of collective excitations with Bose statistics, cyclotron magnetoexcitons, in a high-mobility two-dimensional electron system in a magnetic field. At low temperatures, the dense non-equilibrium ensemble of long-lived triplet magnetoexcitons exhibits both a drastic reduction in the viscosity and a steep enhancement in the response to the external electromagnetic field. The observed effects are related to formation of a super-absorbing state interacting coherently with the electromagnetic field. Simultaneously, the electrons below the Fermi level form a super-emitting state. The effects are explicable from the viewpoint of a coherent condensate phase in a non-equilibrium system of two-dimensional fermions with a fully quantized energy spectrum. The condensation occurs in the space of vectors of magnetic translations, a property providing a completely new landscape for future physical investigations. PMID:27848969

  17. Neural networks to classify speaker independent isolated words recorded in radio car environments

    NASA Astrophysics Data System (ADS)

    Alippi, C.; Simeoni, M.; Torri, V.

    1993-02-01

    Many applications, in particular those requiring nonlinear signal processing, have proved Artificial Neural Networks (ANNs) to be invaluable tools for model-free estimation. The classifying abilities of ANNs are addressed by testing their performance in a speaker-independent word recognition application. A real-world case requiring the implementation of compact integrated devices is taken into account: the classification of isolated words in a radio car environment. A multispeaker database of isolated words was recorded in different environments. The data were first processed to determine the boundaries of each word and then to extract speech features, the latter accomplished using cepstral coefficient representations, log area ratios and filter bank techniques. Multilayer perceptron and adaptive vector quantization neural paradigms were tested to find a reasonable compromise between performance and network simplicity, a fundamental requirement for the implementation of compact real-time neural devices.

  18. Quantum entanglement and position-momentum entropic squeezing of a moving Lambda-type three-level atom interacting with a single-mode quantized field with intensity-dependent coupling

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.

    2013-07-01

    In this paper, we study the interaction between a moving Λ-type three-level atom and a single-mode cavity field in the presence of intensity-dependent atom-field coupling. After obtaining the state vector of the entire system explicitly, we study the nonclassical features of the system, such as quantum entanglement, position-momentum entropic squeezing, quadrature squeezing and sub-Poissonian statistics. The numerical results illustrate that the squeezing period, the duration of entropy squeezing and the maximal squeezing can be controlled by choosing an appropriate nonlinearity function and by introducing the atomic motion effect through a suitable choice of the field-mode structure parameter. The atomic motion, as well as the nonlinearity function, also leads to oscillatory behaviour of the degree of entanglement between the atom and the field.

  19. Limited Rank Matrix Learning, discriminative dimension reduction and visualization.

    PubMed

    Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael

    2012-02-01

    We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank, corresponding to low-dimensional representations of the data. This makes it possible to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, limiting the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real-world data sets serve as an illustration and demonstrate the usefulness of the suggested method. Copyright © 2011 Elsevier Ltd. All rights reserved.
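
    A minimal sketch of the limited-rank relevance distance, under our own naming: Omega has shape (rank, dim), the relevance matrix Lambda = Omega^T Omega therefore has rank at most `rank`, and x -> Omega x is the learned low-dimensional projection used for visualization. The training update is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dim, rank = 10, 2
    Omega = rng.normal(size=(rank, dim))   # adaptive rectangular relevance matrix

    def gmlvq_distance(x, w, Omega):
        """Discriminative distance d(x, w) = (x - w)^T Omega^T Omega (x - w)."""
        diff = Omega @ (x - w)
        return float(diff @ diff)

    prototypes = rng.normal(size=(3, dim))   # one prototype per class (toy data)
    x = rng.normal(size=dim)
    dists = [gmlvq_distance(x, w, Omega) for w in prototypes]
    predicted_class = int(np.argmin(dists))  # nearest-prototype classification
    embedding = Omega @ x                    # 2-D coordinates for visualization
    ```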

  20. Classifying brain metastases by their primary site of origin using a radiomics approach based on texture analysis: a feasibility study.

    PubMed

    Ortiz-Ramón, Rafael; Larroza, Andrés; Ruiz-España, Silvia; Arana, Estanislao; Moratal, David

    2018-05-14

    To examine the capability of MRI texture analysis to differentiate the primary site of origin of brain metastases following a radiomics approach. Sixty-seven untreated brain metastases (BM) were found in 3D T1-weighted MRI of 38 patients with cancer: 27 from lung cancer, 23 from melanoma and 17 from breast cancer. These lesions were segmented in 2D and 3D to compare the discriminative power of 2D and 3D texture features. The images were quantized using different numbers of gray levels to test the influence of quantization. Forty-three rotation-invariant texture features were examined. Feature selection and random forest classification were implemented within a nested cross-validation structure. Classification was evaluated with the area under the receiver operating characteristic curve (AUC) considering two strategies: multiclass and one-versus-one. In the multiclass approach, 3D texture features were more discriminative than 2D features. The best results were achieved for images quantized with 32 gray levels (AUC = 0.873 ± 0.064) using the top four features provided by the feature selection method based on the p-value. In the one-versus-one approach, high accuracy was obtained when differentiating lung cancer BM from breast cancer BM (four features, AUC = 0.963 ± 0.054) and from melanoma BM (eight features, AUC = 0.936 ± 0.070) using the optimal dataset (3D features, 32 gray levels). Classification of breast cancer versus melanoma BM was unsatisfactory (AUC = 0.607 ± 0.180). Volumetric MRI texture features can be useful for differentiating brain metastases from different primary cancers after quantizing the images with the proper number of gray levels. • Texture analysis is a promising source of biomarkers for classifying brain neoplasms. • MRI texture features of brain metastases could help identify the primary cancer. • Volumetric texture features are more discriminative than traditional 2D texture features.
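
    The gray-level quantization step that the study tunes (here to 32 levels) is typically a uniform requantization of the ROI intensity range before texture features are computed. A minimal sketch, with our own function names:

    ```python
    import numpy as np

    def quantize_gray_levels(roi, n_levels=32):
        """Uniformly requantize intensities inside a lesion ROI to n_levels
        bins, the usual preprocessing before texture-feature computation."""
        lo, hi = roi.min(), roi.max()
        edges = np.linspace(lo, hi, n_levels + 1)
        # digitize against interior boundaries yields indices 1..n_levels
        return np.clip(np.digitize(roi, edges[1:-1]) + 1, 1, n_levels)

    roi = np.random.default_rng(2).integers(0, 4096, size=(16, 16, 8))
    q = quantize_gray_levels(roi, 32)
    assert q.min() >= 1 and q.max() <= 32
    ```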

  1. Optical angular momentum and atoms

    PubMed Central

    2017-01-01

    Any coherent interaction of light and atoms needs to conserve energy, linear momentum and angular momentum. What happens to an atom’s angular momentum if it encounters light that carries orbital angular momentum (OAM)? This is a particularly intriguing question as the angular momentum of atoms is quantized, incorporating the intrinsic spin angular momentum of the individual electrons as well as the OAM associated with their spatial distribution. In addition, a mechanical angular momentum can arise from the rotation of the entire atom, which for very cold atoms is also quantized. Atoms therefore allow us to probe and access the quantum properties of light’s OAM, aiding our fundamental understanding of light–matter interactions, and moreover, allowing us to construct OAM-based applications, including quantum memories, frequency converters for shaped light and OAM-based sensors. This article is part of the themed issue ‘Optical orbital angular momentum’. PMID:28069766

  2. Quantum spaces, central extensions of Lie groups and related quantum field theories

    NASA Astrophysics Data System (ADS)

    Poulain, Timothé; Wallet, Jean-Christophe

    2018-02-01

    Quantum spaces with su(2) noncommutativity can be modelled by using a family of SO(3)-equivariant differential *-representations. The quantization maps are determined from the combination of the Wigner theorem for SU(2) with the polar decomposition of the quantized plane waves. A tracial star-product, equivalent to the Kontsevich product for the Poisson manifold dual to su(2), is obtained from a subfamily of differential *-representations. Noncommutative (scalar) field theories free from UV/IR mixing, and whose commutative limit coincides with the usual ϕ⁴ theory on ℝ³, are presented. A generalization of the construction to semi-simple, possibly non-simply-connected Lie groups, based on their central extensions by suitable abelian Lie groups, is discussed. Based on a talk presented by Poulain T at the XXVth International Conference on Integrable Systems and Quantum Symmetries (ISQS-25), Prague, June 6-10, 2017.

  3. An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

    NASA Astrophysics Data System (ADS)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2011-12-01

    Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy-protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and second criteria simultaneously is not feasible, and that a tradeoff between them is inevitable. In this article, we propose an effective fixed-bit-allocation discretization approach that involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.
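
    A sketch of the quantize-then-encode pipeline, under our reading of the LSSC property (codewords whose Hamming distance equals the difference of their interval indices, i.e., a thermometer-style code); the cut points and interval count are illustrative assumptions.

    ```python
    import numpy as np

    def unsupervised_quantize(value, edges):
        """Map a feature value to an interval index using fixed cut points
        (no class information involved)."""
        return int(np.digitize(value, edges))

    def lssc_encode(index, n_intervals):
        """Thermometer-style codeword: `index` ones followed by zeros, so the
        Hamming distance between two codewords equals the difference of
        their interval indices (our hedged reading of LSSC)."""
        return [1] * index + [0] * (n_intervals - 1 - index)

    edges = [-1.0, 0.0, 1.0]   # 4 intervals per feature (assumed)
    for v in (-2.0, 0.5, 3.0):
        i = unsupervised_quantize(v, edges)
        print(v, i, lssc_encode(i, 4))
    ```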

  4. Development of Advanced Technologies for Complete Genomic and Proteomic Characterization of Quantized Human Tumor Cells

    DTIC Science & Technology

    2014-07-01

    establishment of Glioblastoma (GBM) cell lines from GBM patients' tumor samples and quantized cell populations of each of the parental GBM cell lines, we... GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these... analysis of these quantized cell subpopulations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive

  5. Differential calculus on quantized simple lie groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1991-07-01

    Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SU q (2), are introduced for quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups was used. An equivalence of Woronowicz's enveloping algebra generated by the dual space to the left-invariant differential forms and the corresponding quantized universal enveloping algebra, is obtained for our differential calculi. Real forms for q ∈ ℝ are also discussed.

  6. Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.

    PubMed

    Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V

    2013-01-16

    Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.

  7. Deformation quantizations with separation of variables on a Kähler manifold

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    1996-10-01

    We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.

  8. Extension of loop quantum gravity to f(R) theories.

    PubMed

    Zhang, Xiangdong; Ma, Yongge

    2011-04-29

    The four-dimensional metric f(R) theories of gravity are cast into connection-dynamical formalism with real su(2) connections as configuration variables. Through this formalism, the classical metric f(R) theories are quantized by extending the loop quantization scheme of general relativity. Our results imply that the nonperturbative quantization procedure of loop quantum gravity is valid not only for general relativity but also for a rather general class of four-dimensional metric theories of gravity.

  9. Foundations of Quantum Mechanics: Derivation of a dissipative Schrödinger equation from first principles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, L.A.; Olavo, L.S.F., E-mail: olavolsf@gmail.com

    Dissipation in Quantum Mechanics took some time to become a robust field of investigation after the birth of the field. The main issue hindering developments in the field is that the quantization process was always tightly connected to the Hamiltonian formulation of Classical Mechanics. In this paper we present a quantization process that does not depend upon the Hamiltonian formulation of Classical Mechanics (although it still departs from Classical Mechanics) and thus overcome the problem of finding, from first principles, a completely general Schrödinger equation encompassing dissipation. This generalized process of quantization is shown to be nothing but an extension of a more restricted version that is shown to produce the Schrödinger equation for Hamiltonian systems from first principles (even for Hamiltonian velocity-dependent potentials). - Highlights: • A quantization process independent of the Hamiltonian formulation of Classical Mechanics is proposed. • This quantization method is applied to dissipative or absorptive systems. • A dissipative Schrödinger equation is derived from first principles.

  10. Can one ADM quantize relativistic bosonic strings and membranes?

    NASA Astrophysics Data System (ADS)

    Moncrief, Vincent

    2006-04-01

    The standard methods for quantizing relativistic strings diverge significantly from the Dirac-Wheeler-DeWitt program for quantization of generally covariant systems, and one wonders whether the latter could be successfully implemented as an alternative to the former. As a first step in this direction, we consider the possibility of quantizing strings (and also relativistic membranes) via a partially gauge-fixed ADM (Arnowitt, Deser and Misner) formulation of the reduced field equations for these systems. By exploiting some (Euclidean signature) Hamilton-Jacobi techniques that Mike Ryan and I had developed previously for the quantization of Bianchi IX cosmological models, I show how to construct Diff(S¹)-invariant (or Diff(Σ)-invariant in the case of membranes) ground state wave functionals for the cases of co-dimension one strings and membranes embedded in Minkowski spacetime. I also show that the reduced Hamiltonian density operators for these systems weakly commute when applied to physical (i.e. Diff(S¹)- or Diff(Σ)-invariant) states. While many open questions remain, these preliminary results seem to encourage further research along the same lines.

  11. Application of heterogeneous pulse coupled neural network in image quantization

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun

    2016-11-01

    On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models and have different parameters for different image regions, which allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. The HPCNN also conforms to characteristics of human vision. The parameters of the HPCNN model are calculated automatically according to these categories, so that the quantized results are optimal and better suited to human observation. Experimental results on natural images from a standard image library show the validity and efficiency of the proposed quantization method.

  12. Nonperturbative light-front Hamiltonian methods

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2016-09-01

    We examine the current state-of-the-art in nonperturbative calculations done with Hamiltonians constructed in light-front quantization of various field theories. The language of light-front quantization is introduced, and important (numerical) techniques, such as Pauli-Villars regularization, discrete light-cone quantization, basis light-front quantization, the light-front coupled-cluster method, the renormalization group procedure for effective particles, sector-dependent renormalization, and the Lanczos diagonalization method, are surveyed. Specific applications are discussed for quenched scalar Yukawa theory, ϕ4 theory, ordinary Yukawa theory, supersymmetric Yang-Mills theory, quantum electrodynamics, and quantum chromodynamics. The content should serve as an introduction to these methods for anyone interested in doing such calculations and as a rallying point for those who wish to solve quantum chromodynamics in terms of wave functions rather than random samplings of Euclidean field configurations.

  13. Investigation of valley-resolved transmission through gate defined graphene carrier guiders

    NASA Astrophysics Data System (ADS)

    Cao, Shi-Min; Zhou, Jiao-Jiao; Wei, Xuan; Cheng, Shu-Guang

    2017-04-01

    Massless charge carriers in gate potentials are guided through graphene quantum wells much as electromagnetic waves propagate in optical fibers. A recent experiment by Kim et al (2016 Nat. Phys. 12 1022) reports valley-symmetry-preserved transport in a graphene carrier guider. Based on a tight-binding model, the valley-resolved transport coefficients are calculated with the method of scattering matrix theory. For a straight potential well, the valley-resolved conductance is quantized in steps of (2n + 1) × 2e²/h for integer n. In the absence of disorder, intervalley scattering occurs only at the two ends of the potential well and is weak. The propagating modes inside the potential well are analyzed with the help of the band structure and wave function distribution. The conductance is better preserved for a longer carrier guider. The quantized conductance is barely affected by boundaries of different types or by slightly changing the orientation of the carrier guider. For a curved guider, states with momentum close to the neutral point are more fragile to boundary scattering, and the quantized conductance is degraded accordingly.

  14. Quantization Of Temperature

    NASA Astrophysics Data System (ADS)

    O'Brien, Paul

    2017-01-01

    Max Planck did not quantize temperature. I will show that the Planck temperature violates the Planck scale. Planck stated that the Planck scale was Nature's scale, independent of human constructs, and that even aliens would derive the same values. He made a huge mistake, because temperature is based on the Kelvin scale, which is man-made just like the meter and the kilogram. He did not discover Nature's scale for the quantization of temperature. His formula is flawed, and his value is incorrect. Planck's calculation is T_P = M_P c²/k_B; the general form of this equation is T = E/k_B. Why is this wrong? The temperature for a fixed amount of energy depends on the volume it occupies. The correct formula involves specifying the radius of the volume, entering through the product (RE). This leads to an inequality and a limit that is equivalent to the Bekenstein bound, but using temperature instead of entropy. Rewriting this equation as a limit defines both the maximum temperature and Boltzmann's constant. This saturates any space-time boundary with the maximum temperature and information density, as well as the minimum radius and entropy. The general form of the equation then becomes a limit in black hole thermodynamics: T ≤ (RE)/(λ k_B).
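
    For reference, the conventional Planck temperature that this abstract argues against is standard physics and can be stated cleanly (the radius-dependent bound is the abstract's own proposal, transcribed as given):

    ```latex
    % Conventional Planck temperature:
    T_P = \frac{m_P c^2}{k_B} = \sqrt{\frac{\hbar c^5}{G k_B^2}}
        \approx 1.42 \times 10^{32}\,\mathrm{K},
    % versus the abstract's proposed radius-dependent bound:
    \qquad T \le \frac{R\,E}{\lambda\, k_B}.
    ```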

  15. A Variant of the Mukai Pairing via Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Ramadoss, Ajay C.

    2012-06-01

    Let X be a smooth projective complex variety. The Hochschild homology HH•(X) of X is an important invariant of X, which is isomorphic to the Hodge cohomology of X via the Hochschild-Kostant-Rosenberg isomorphism. On HH•(X), one has the Mukai pairing constructed by Caldararu. An explicit formula for the Mukai pairing at the level of Hodge cohomology was proven by the author in an earlier work (following ideas of Markarian). This formula implies a similar explicit formula for a closely related variant of the Mukai pairing on HH•(X). The latter pairing on HH•(X) is intimately linked to the study of Fourier-Mukai transforms of complex projective varieties. We give a new method to prove a formula computing the aforementioned variant of Caldararu's Mukai pairing. Our method is based on some important results in the area of deformation quantization. In particular, we use part of the work of Kashiwara and Schapira on Deformation Quantization modules together with an algebraic index theorem of Bressler, Nest and Tsygan. Our new method explicitly shows that the "Noncommutative Riemann-Roch" implies the classical Riemann-Roch. Further, it is hoped that our method would be useful for generalization to settings involving certain singular varieties.

  16. Security Enhancement of Wireless Sensor Networks Using Signal Intervals

    PubMed Central

    Moon, Jaegeun; Jung, Im Y.; Yoo, Jaesoo

    2017-01-01

    Various wireless technologies, such as RF, Bluetooth, and Zigbee, have been applied to sensor communications. However, the applications of Bluetooth-based wireless sensor networks (WSN) have a security issue. In one pairing process during Bluetooth communication, which is known as simple secure pairing (SSP), the devices are required to specify I/O capability or user interference to prevent man-in-the-middle (MITM) attacks. This study proposes an enhanced SSP in which a nonce to be transferred is converted to a corresponding signal interval. The quantization level, which is used to interpret physical signal intervals, is renewed at every connection by the transferred nonce and applied to the next nonce exchange so that the same signal intervals can represent different numbers. Even if attackers eavesdrop on the signals, they cannot understand what is being transferred because they cannot determine the quantization level. Furthermore, the proposed model does not require exchanging passkeys as data, and the devices are secure in the case of using a fixed PIN. Subsequently, the new quantization level is calculated automatically whenever the same devices attempt to connect with each other. Therefore, the pairing process can be protected from MITM attacks and be convenient for users. PMID:28368341

  17. Security Enhancement of Wireless Sensor Networks Using Signal Intervals.

    PubMed

    Moon, Jaegeun; Jung, Im Y; Yoo, Jaesoo

    2017-04-02

    Various wireless technologies, such as RF, Bluetooth, and Zigbee, have been applied to sensor communications. However, the applications of Bluetooth-based wireless sensor networks (WSN) have a security issue. In one pairing process during Bluetooth communication, which is known as simple secure pairing (SSP), the devices are required to specify I/O capability or user interference to prevent man-in-the-middle (MITM) attacks. This study proposes an enhanced SSP in which a nonce to be transferred is converted to a corresponding signal interval. The quantization level, which is used to interpret physical signal intervals, is renewed at every connection by the transferred nonce and applied to the next nonce exchange so that the same signal intervals can represent different numbers. Even if attackers eavesdrop on the signals, they cannot understand what is being transferred because they cannot determine the quantization level. Furthermore, the proposed model does not require exchanging passkeys as data, and the devices are secure in the case of using a fixed PIN. Subsequently, the new quantization level is calculated automatically whenever the same devices attempt to connect with each other. Therefore, the pairing process can be protected from MITM attacks and be convenient for users.
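
    A hypothetical sketch of the nonce-to-interval idea described above: a nonce is transmitted as a physical signal interval that is only interpretable under the current quantization level, which both endpoints renew from the previous nonce. The function names, units, and the hash-based renewal rule are our assumptions, not the paper's exact construction.

    ```python
    import hashlib

    def nonce_to_interval(nonce, q_level):
        """Encode a small integer nonce as a signal interval (e.g., in us)."""
        return nonce * q_level

    def interval_to_nonce(interval, q_level):
        """Decode; an eavesdropper without q_level cannot invert this."""
        return round(interval / q_level)

    def next_q_level(nonce, lo=50, hi=200):
        """Renew the quantization level from the last exchanged nonce."""
        h = hashlib.sha256(nonce.to_bytes(4, "big")).digest()
        return lo + int.from_bytes(h[:2], "big") % (hi - lo)

    q = 100                        # initial level agreed out of band
    nonce = 37
    iv = nonce_to_interval(nonce, q)
    assert interval_to_nonce(iv, q) == nonce
    q = next_q_level(nonce)        # both endpoints derive the next level
    ```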

  18. Table look-up estimation of signal and noise parameters from quantized observables

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1986-01-01

    A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
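
    A toy version of the look-up idea, with all modeling choices ours: precompute a table mapping the underlying noise level to the expected power of a 4-bit quantizer's output, then estimate the noise level of new data by matching its measured quantized power against the table.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def quantize_4bit(x, step=0.25):
        """Uniform mid-tread 4-bit quantizer: 16 levels, clipped."""
        return np.clip(np.round(x / step), -8, 7) * step

    # Build the look-up table offline: expected quantized power vs. sigma.
    sigmas = np.linspace(0.1, 3.0, 60)
    table = np.array([np.mean(quantize_4bit(rng.normal(0, s, 200_000)) ** 2)
                      for s in sigmas])

    def estimate_sigma(samples):
        """Match the measured quantized power to the nearest table entry."""
        p = np.mean(quantize_4bit(samples) ** 2)
        return sigmas[np.argmin(np.abs(table - p))]

    true_sigma = 1.3
    est = estimate_sigma(rng.normal(0, true_sigma, 50_000))
    print(true_sigma, est)   # the estimate should land near 1.3
    ```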

  19. Digital television system design study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1976-01-01

    The use of digital techniques for the transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner that includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and by the quantization and coding schemes employed.

  20. Investigation of Coding Techniques for Memory and Delay Efficient Interleaving in Slow Rayleigh Fading

    DTIC Science & Technology

    1991-11-01

    ...quantization assigned two quantization values: one for demodulation values larger than zero and another for demodulation values smaller than zero (for maximum-likelihood decisions). Logic 0 was assigned for a positive demodulation value and a logic 1 was...

  1. Gold nanoparticles produced in situ mediate bioelectricity and hydrogen production in a microbial fuel cell by quantized capacitance charging.

    PubMed

    Kalathil, Shafeer; Lee, Jintae; Cho, Moo Hwan

    2013-02-01

    Oppan quantized style: By adding a gold precursor at its cathode, a microbial fuel cell (MFC) is demonstrated to form gold nanoparticles that can be used to simultaneously produce bioelectricity and hydrogen. By exploiting the quantized capacitance charging effect, the gold nanoparticles mediate the production of hydrogen without requiring an external power supply, while the MFC produces a stable power density. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new wavelet-based frequency-domain watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image, and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image is designed to be image-adaptive. The meaningful, complementary watermark images are embedded into the original (host) image by odd-even quantization of selected detail wavelet coefficients of the original image, namely those whose magnitudes exceed their corresponding Just Noticeable Difference (JND) thresholds. Tests show good robustness against the best-known attacks, such as noise addition, image compression, median filtering, clipping, and geometric transforms. Further research may improve performance by refining the JND thresholds.
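
    Odd-even quantization of a coefficient is the classic quantization-index-modulation move: snap the coefficient to the nearest multiple of a step whose index parity encodes the bit. A minimal sketch (the step size and coefficient selection are assumptions; the paper applies this to JND-selected wavelet coefficients):

    ```python
    import numpy as np

    def embed_bit(coeff, bit, delta=8.0):
        """Move the coefficient to the nearest multiple of delta whose
        quantization index parity matches the watermark bit."""
        q = int(np.round(coeff / delta))
        if q % 2 != bit:
            q += 1 if coeff / delta >= q else -1
        return q * delta

    def extract_bit(coeff, delta=8.0):
        """Recover the bit from the parity of the quantization index."""
        return int(np.round(coeff / delta)) % 2

    c = 37.3
    w = embed_bit(c, 1)          # 37.3/8 = 4.66 -> index 5 (odd) -> 40.0
    assert extract_bit(w) == 1
    assert extract_bit(embed_bit(c, 0)) == 0
    ```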

  3. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  4. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
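
    A minimal sketch of the scalar-quantization core of the wavelet/scalar quantization idea, assuming the PyWavelets package and a similar biorthogonal wavelet. The real WSQ specification uses subband-dependent step sizes, a particular 9/7 filter bank, and Huffman entropy coding, none of which are reproduced here.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def deadzone_quantize(c, step):
        """Uniform scalar quantizer with a dead zone around zero."""
        return np.sign(c) * np.floor(np.abs(c) / step)

    def deadzone_dequantize(q, step):
        """Reconstruct to the midpoint of each quantization cell."""
        return np.sign(q) * (np.abs(q) + 0.5) * step * (q != 0)

    img = np.random.default_rng(4).uniform(0, 255, (64, 64))
    coeffs = pywt.wavedec2(img, "bior4.4", level=3)
    step = 10.0
    quantized = [deadzone_quantize(coeffs[0], step)] + [
        tuple(deadzone_quantize(d, step) for d in detail) for detail in coeffs[1:]
    ]
    recon_coeffs = [deadzone_dequantize(quantized[0], step)] + [
        tuple(deadzone_dequantize(d, step) for d in detail) for detail in quantized[1:]
    ]
    recon = pywt.waverec2(recon_coeffs, "bior4.4")
    ```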

  5. Magnetic monopole in noncommutative space-time and Wu-Yang singularity-free gauge transformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laangvik, Miklos; Salminen, Tapio; Tureanu, Anca

    2011-04-15

    We investigate the validity of the Dirac quantization condition for magnetic monopoles in noncommutative space-time. We use an approach which is based on an extension of the method introduced by Wu and Yang. To study the effects of noncommutativity of space-time, we consider the gauge transformations of U⋆(1) gauge fields and use the corresponding deformed Maxwell's equations. Using a perturbation expansion in the noncommutativity parameter θ, we show that the Dirac quantization condition remains unmodified up to the first order in the expansion parameter. The result is obtained for a class of noncommutative source terms, which reduce to the Dirac delta function in the commutative limit.

  6. Reformulation of the covering and quantizer problems as ground states of interacting particles.

    PubMed

    Torquato, S

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R(d) interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R(d) that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.

  7. Reformulation of the covering and quantizer problems as ground states of interacting particles

    NASA Astrophysics Data System (ADS)

    Torquato, S.

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d -dimensional Euclidean space Rd interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in Rd that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the “void” nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their “dual” solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.
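
    The quantizer error the abstract refers to can be probed numerically. As a sketch, the dimensionless second moment of the integer lattice Zᵈ (nearest-integer rounding of points uniform in the unit cell) is G = E[||x − round(x)||²]/d = 1/12 ≈ 0.0833 in every dimension; better lattices score lower.

    ```python
    import numpy as np

    # Monte Carlo estimate of the quantizer error of the integer lattice Z^d.
    rng = np.random.default_rng(5)
    for d in (1, 2, 3):
        x = rng.uniform(0, 1, size=(1_000_000, d))
        err = np.mean(np.sum((x - np.round(x)) ** 2, axis=1)) / d
        print(d, err)   # ~0.0833 for Z^d; e.g., the BCC lattice does better in 3-D
    ```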

  8. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

    Markov random fields based on the lattice Z² have been extensively used in image analysis in a Bayesian framework as a priori models for the... of Image Analysis can be given some fundamental justification, then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  9. Modeling Geometric-Temporal Context With Directional Pyramid Co-Occurrence for Action Recognition.

    PubMed

    Yuan, Chunfeng; Li, Xi; Hu, Weiming; Ling, Haibin; Maybank, Stephen J

    2014-02-01

    In this paper, we present a new geometric-temporal representation for visual action recognition based on local spatio-temporal features. First, we propose a modified covariance descriptor under the log-Euclidean Riemannian metric to represent the spatio-temporal cuboids detected in video sequences. Compared with previously proposed covariance descriptors, our descriptor can be measured and clustered in Euclidean space. Second, to capture the geometric-temporal contextual information, we construct a directional pyramid co-occurrence matrix (DPCM) to describe the spatio-temporal distribution of the vector-quantized local feature descriptors extracted from a video. DPCM characterizes the co-occurrence statistics of local features as well as the spatio-temporal positional relationships among the concurrent features. These statistics provide strong descriptive power for action recognition. To use DPCM for action recognition, we propose a directional pyramid co-occurrence matching kernel to measure the similarity of videos. The proposed method achieves state-of-the-art performance and improves on the recognition performance of bag-of-visual-words (BOVW) models by a large margin on six public data sets. For example, on the KTH data set it achieves 98.78% accuracy while the BOVW approach only achieves 88.06%. On both the Weizmann and UCF CIL data sets, the highest possible accuracy of 100% is achieved.
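
    As a simplified stand-in for the directional co-occurrence statistics described above: given each local feature's codebook label and (t, y, x) position, count label pairs that co-occur under a chosen displacement. The pyramid structure and matching kernel are not reproduced; names and tolerances are ours.

    ```python
    import numpy as np

    def cooccurrence(labels, positions, d, n_words, tol=1.5):
        """Count co-occurrences of vector-quantized labels under displacement d."""
        M = np.zeros((n_words, n_words))
        for i, p in enumerate(positions):
            target = p + d
            hits = np.where(np.linalg.norm(positions - target, axis=1) < tol)[0]
            for j in hits:
                M[labels[i], labels[j]] += 1
        return M

    rng = np.random.default_rng(7)
    positions = rng.integers(0, 20, size=(100, 3)).astype(float)  # (t, y, x)
    labels = rng.integers(0, 16, size=100)                        # codebook labels
    M = cooccurrence(labels, positions, d=np.array([1.0, 0.0, 0.0]), n_words=16)
    ```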

  10. Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling

    NASA Astrophysics Data System (ADS)

    Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong

    2016-03-01

    In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method that draws on prior knowledge learned from previous high-quality, normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least-squares (PWLS) algorithm is adopted for image reconstruction, with the penalty term formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan is first segmented into different tissue types by a feature vector quantization (FVQ) approach. For each tissue type, a set of tissue-specific coefficients for the gMRF penalty is statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image finally form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS reconstruction algorithm is shown to be more effective at preserving image textures than the conventional PWLS algorithm, and we further demonstrate the gain from high-order MRF modeling for texture-preserving LdCT PWLS reconstruction.
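
    A minimal gradient-descent sketch of the PWLS objective, x* = argmin (y − Ax)ᵀW(y − Ax) + β xᵀRx, with a generic quadratic smoothness penalty standing in for the paper's texture-learned, tissue-specific gMRF penalty. All sizes, weights and step sizes are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, m = 40, 60
    A = rng.normal(size=(m, n))               # system (projection) matrix
    x_true = rng.normal(size=n)
    y = A @ x_true + rng.normal(scale=0.1, size=m)
    W = np.diag(1.0 / (1.0 + np.abs(y)))      # statistical weights (assumed model)
    R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D quadratic MRF prior
    beta, step = 0.1, 2e-3

    x = np.zeros(n)
    for _ in range(5000):
        grad = -2 * A.T @ W @ (y - A @ x) + 2 * beta * R @ x
        x -= step * grad                      # plain gradient descent on the cost
    ```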

  11. Fierz bilinear formulation of the Maxwell–Dirac equations and symmetry reductions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inglis, Shaun, E-mail: sminglis@utas.edu.au; Jarvis, Peter, E-mail: Peter.Jarvis@utas.edu.au

    We study the Maxwell–Dirac equations in a manifestly gauge invariant presentation using only the spinor bilinear scalar and pseudoscalar densities, and the vector and pseudovector currents, together with their quadratic Fierz relations. The internally produced vector potential is expressed via algebraic manipulation of the Dirac equation, as a rational function of the Fierz bilinears and first derivatives (valid on the support of the scalar density), which allows a gauge invariant vector potential to be defined. This leads to a Fierz bilinear formulation of the Maxwell tensor and of the Maxwell–Dirac equations, without any reference to gauge dependent quantities. We showmore » how demanding invariance of tensor fields under the action of a fixed (but arbitrary) Lie subgroup of the Poincaré group leads to symmetry reduced equations. The procedure is illustrated, and the reduced equations worked out explicitly for standard spherical and cylindrical cases, which are coupled third order nonlinear PDEs. Spherical symmetry necessitates the existence of magnetic monopoles, which do not affect the coupled Maxwell–Dirac system due to magnetic terms cancelling. In this paper we do not take up numerical computations. As a demonstration of the power of our approach, we also work out the symmetry reduced equations for two distinct classes of dimension 4 one-parameter families of Poincaré subgroups, one splitting and one non-splitting. The splitting class yields no solutions, whereas for the non-splitting class we find a family of formal exact solutions in closed form. - Highlights: • Maxwell–Dirac equations derived in manifestly gauge invariant tensor form. • Invariant scalar and four vector fields for four Poincaré subgroups derived, including two unusual cases. • Symmetry reduction imposed on Maxwell–Dirac equations under example subgroups. • Magnetic monopole arises for spherically symmetric case, consistent with charge quantization condition.« less

  12. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters that then become available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
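
    A minimal sketch of Floyd-Steinberg error diffusion generalized to multiple output levels; here the quantizer uses midpoint thresholds (nearest palette entry), which is exactly the kind of threshold choice the abstract flags as influential.

    ```python
    import numpy as np

    def error_diffuse(img, levels=4):
        """Floyd-Steinberg error diffusion with a multi-level quantizer:
        each pixel snaps to the nearest of `levels` gray values and the
        residual error is pushed onto unprocessed neighbors."""
        out = img.astype(float).copy()
        h, w = out.shape
        palette = np.linspace(0, 255, levels)
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = palette[np.argmin(np.abs(palette - old))]
                out[y, x] = new
                err = old - new
                if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
                if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
                if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
        return out

    gradient = np.tile(np.linspace(0, 255, 64), (32, 1))
    halftone = error_diffuse(gradient, levels=4)
    ```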

  13. Natural inflation from polymer quantization

    NASA Astrophysics Data System (ADS)

    Ali, Masooma; Seahra, Sanjeev S.

    2017-11-01

    We study the polymer quantization of a homogeneous massive scalar field in the early Universe using a prescription inequivalent to those previously appearing in the literature. Specifically, we assume a Hilbert space for which the scalar field momentum is well defined but its amplitude is not. This is closer in spirit to the quantization scheme of loop quantum gravity, in which no unique configuration operator exists. We show that in the semiclassical approximation, the main effect of this polymer quantization scheme is to compactify the phase space of chaotic inflation in the field amplitude direction. This gives rise to an effective scalar potential closely resembling that of hybrid natural inflation. Unlike polymer schemes in which the scalar field amplitude is well defined, the semiclassical dynamics involves a past cosmological singularity; i.e., this approach does not mitigate the big bang.

  14. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample can be solved. The results indicate that, to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  15. Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers

    NASA Astrophysics Data System (ADS)

    Shan, Shaukat Ali; Haque, Q.

    2018-01-01

    The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense degenerate plasma with a quantized magnetic field. The linear drift ion acoustic wave propagation, along with nonlinear structures such as double layers and solitary waves, is found to depend strongly on the drift speed, the magnetic field quantization parameter β, and the temperature degeneracy. Graphical illustrations show that the frequency of the linear waves and the amplitude of the solitary waves increase with temperature degeneracy and the Landau quantization effect, while the amplitude of the double layers decreases as η and T increase. The relevance of the present study to the plasma environments of fast-ignition inertial confinement fusion, white dwarf stars, and short-pulse petawatt laser technology is pointed out.

  16. Time-Symmetric Quantization in Spacetimes with Event Horizons

    NASA Astrophysics Data System (ADS)

    Kobakhidze, Archil; Rodd, Nicholas

    2013-08-01

    The standard quantization formalism in spacetimes with event horizons implies a non-unitary evolution of quantum states, as initial pure states may evolve into thermal states. This phenomenon is behind the famous black hole information loss paradox which provoked long-standing debates on the compatibility of quantum mechanics and gravity. In this paper we demonstrate that within an alternative time-symmetric quantization formalism thermal radiation is absent and states evolve unitarily in spacetimes with event horizons. We also discuss the theoretical consistency of the proposed formalism. We explicitly demonstrate that the theory preserves the microcausality condition and suggest a "reinterpretation postulate" to resolve other apparent pathologies associated with negative energy states. Accordingly as there is a consistent alternative, we argue that choosing to use time-asymmetric quantization is a necessary condition for the black hole information loss paradox.

  17. Adaptive robust fault tolerant control design for a class of nonlinear uncertain MIMO systems with quantization.

    PubMed

    Ao, Wei; Song, Yongdong; Wen, Changyun

    2017-05-01

    In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures and quantization errors, and a range of parameters for the quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the controller guarantees an ultimately uniformly bounded output tracking error and that the signals of the closed-loop system remain bounded, even in the presence of up to m-q actuators being stuck or out of service. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. On a canonical quantization of 3D Anti de Sitter pure gravity

    NASA Astrophysics Data System (ADS)

    Kim, Jihun; Porrati, Massimo

    2015-10-01

    We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,ℝ) × SL(2,ℝ). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,ℝ) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly-curved AdS3.

  19. Five dimensional microstate geometries

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Wei

    In this thesis, we discuss the possibility of exploring the statistical mechanics description of a black hole from the point of view of supergravity. Specifically, we study five-dimensional microstate geometries of a black hole or black ring. We first review the method for finding general three-charge BPS supergravity solutions proposed by Bena and Warner. By applying this method, we show that the classical merger of a black ring and a black hole on a [Special characters omitted.] base space is in general irreversible. On the other hand, we review the solutions on an ambi-polar Gibbons-Hawking (GH) base, which are bubbled geometries. There are many possible microstate geometries among the bubbled geometries. In particular, we show that a generic blob of GH points satisfying certain conditions can be a horizonless microstate geometry of either a black hole or a black ring. Furthermore, using the result of the entropy analysis of the classical merger as a guide, we show that one can have a merger of a black-hole blob and a black-ring blob, or of two black-ring blobs, that corresponds to a classical irreversible merger. From the irreversible mergers, we find the scaling solutions and deep microstates, which are microstate geometries of a black hole/ring with a macroscopic horizon. These solutions have the same AdS throats as classical black holes/rings, but instead of being infinite, the throat is smoothly capped off at a very large depth with some local structure at the bottom. For solutions produced from a U(1) × U(1)-invariant merger, the depth of the throat is limited by flux quantization. The mass gap is related to the depth of this throat, and we show that the mass gap of these solutions roughly matches the mass gap of the typical conformal-field-theory (CFT) states. Therefore, based on the AdS/CFT correspondence, they can be dual geometries of the typical CFT states that contribute to the entropy of a black hole/ring. On the other hand, we show that for solutions produced from a more general merger (without U(1) × U(1) invariance), the throat can be arbitrarily deep. This presents a puzzle from the point of view of the AdS/CFT correspondence. We propose that this puzzle may be solved by some quantization of the angle or by promoting the flux vectors to quantum spins. Finally, we suggest some future directions for further study, including the puzzle of the arbitrarily long AdS throat and a general coarse-graining picture of microstate geometries.

  20. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with a limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange a finite-bit binary symbolic data sequence with its neighborhood agents at each time step, due to digital communication channels with energy constraints. A novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which no quantizer in the network ever saturates. The convergence rate of consensus is explicitly characterized; it depends on the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of the agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchanged between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
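
    A much-simplified, time-triggered sketch of the encoder/decoder idea (the paper's scheme is event-triggered and selects the bit count per step; the gains and scaling function below are our assumptions): each agent transmits a coarsely quantized innovation, every receiver maintains the same decoded estimate, and a decaying scaling function keeps the finite quantizer from saturating.

    ```python
    import numpy as np

    def quantize(v, levels=3):
        """Finite symmetric uniform quantizer on [-1, 1]; with levels=3 each
        transmission is a single ternary symbol (roughly one bit per step)."""
        return np.round(np.clip(v, -1.0, 1.0) * (levels - 1) / 2) * 2 / (levels - 1)

    A = np.array([[0, 1, 0, 0],   # directed ring: agent i listens to agent i+1
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
    deg = A.sum(axis=1)
    x = np.array([3.0, -1.0, 0.5, 2.0])   # initial states; average is 1.125
    xhat = np.zeros(4)                    # decoder estimates, identical network-wide
    s, gamma, eps = 4.0, 0.95, 0.3        # scaling function and control gain (assumed)
    for t in range(300):
        xhat = xhat + s * quantize((x - xhat) / s)  # each agent encodes; all decode
        x = x + eps * (A @ xhat - deg * x)          # consensus on decoded estimates
        s *= gamma
    print(x)   # entries cluster near the initial average
    ```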
